Search results for: gas turbine power plant

576 Acerola and Orange By-Products as Sources of Bioactive Compounds for Probiotic Fermented Milks

Authors: Tatyane Lopes de Freitas, Antonio Diogo S. Vieira, Susana Marta Isay Saad, Maria Ines Genovese

Abstract:

The fruit processing industries generate large volumes of residues in the production of juices, pulps, and jams. These residues, or by-products, consisting of peels, seeds, and pulps, are routinely discarded. Fruits are rich in bioactive compounds, including polyphenols, which have positive effects on health. Dry residues from two fruits, acerola (M. emarginata D. C.) and orange (C. sinensis), were characterized with respect to their contents of ascorbic acid, minerals, total dietary fiber, moisture, ash, lipids, proteins, and carbohydrates, as well as their high-performance liquid chromatographic profile of flavonoids, total polyphenol and proanthocyanidin contents, and antioxidant capacity by three different methods (ferric reducing antioxidant power assay (FRAP), oxygen radical absorbance capacity (ORAC), and 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging activity). Acerola by-products presented the highest ascorbic acid content (605 mg/100 g) and better antioxidant capacity than orange by-products. The dry residues from acerola showed high contents of proanthocyanidins (617 µg CE/g) and total polyphenols (2525 mg gallic acid equivalents (GAE)/100 g). Both presented high total dietary fiber (above 60%) and protein contents (acerola: 10.4%; orange: 9.9%), and reduced fat content (acerola: 1.6%; orange: 2.6%). Both residues showed high levels of potassium, calcium, and magnesium and were considered sources of these minerals. With the acerola by-product, four formulations of probiotic fermented milks were produced: F0 (without the addition of acerola residue (AR)), F2 (2% AR), F5 (5% AR), and F10 (10% AR). The physicochemical characteristics of the fermented milks throughout storage were investigated, as well as the impact of in vitro simulated gastrointestinal conditions on flavonoids and probiotics. The microorganisms analyzed maintained their populations at around 8 log CFU/g during storage. After the gastric phase of the simulated digestion, the populations decreased, and after the enteric phase, no colonies were detected. On the other hand, the flavonoids increased after the gastric phase, maintaining their levels or showing a small decrease after the enteric phase. Acerola by-product powder is a valuable ingredient for use in functional foods because it is rich in vitamin C, fibers, and flavonoids. These flavonoids appear to be highly resistant to the acids and salts of digestion.

Keywords: acerola, orange, by-products, fermented milk

Procedia PDF Downloads 113
575 Challenges of School Leadership

Authors: Stefan Ninković

Abstract:

The main purpose of this paper is to examine different theoretical approaches and relevant empirical evidence and thus to recognize some of the most pressing challenges faced by school leaders. This paper starts from the fact that the new mission of the school is characterized by the need for stronger coordination among students' academic, social, and emotional learning. In this sense, school leaders need to focus their commitment, vision, and leadership on issues of students' attitudes, language, cultural and social background, and sexual orientation. More specifically, they should know what good teaching is for students at risk, students whose first language is not dominant in school, those whose learning styles are not in accordance with usual teaching styles, or those who are stigmatized. There is a rather wide consensus that the traditionally popular concept of instructional leadership by the school principal is no longer sufficient. However, in a number of "pro-leadership" circles, including certain groups of academic researchers, consultants, and practitioners, there is an established tendency to attribute to the school principal an extraordinary influence on school achievement. On the other hand, the situation in which all employees in the school are leaders is a utopia par excellence. Although leadership obviously can be efficiently distributed across the school, there are few findings that speak about the sources of this distribution and the factors making it sustainable. Another idea that is not particularly new, but has only recently gained in importance, is that the collective capacity of the school is an important resource that often remains under-cultivated. To understand the nature and power of collaborative school cultures, it is necessary to know that they operate in a way that makes all their members' tacit knowledge explicit. In this sense, the question is how leaders in schools can shape collaborative culture and create social capital in the school. The pressure exerted on schools to systematically collect and use data has been accompanied by the need for school leaders to develop new competencies. The role of school leaders is critical in the process of assessing what data are needed and for what purpose. Different types of data are important: test results, data on student absenteeism, satisfaction with school, teacher motivation, etc. One of the most important tasks of school leaders is data-driven decision making, as well as ensuring the transparency of the decision-making process. Finally, the question arises whether the existing models of school leadership are compatible with current social and economic trends. It is necessary to examine whether and under what conditions schools need forms of leadership that are different from those that currently prevail. Closely related to this issue is the analysis of the adequacy of different approaches to leadership development in schools.

Keywords: educational changes, leaders, leadership, school

Procedia PDF Downloads 316
574 Coulomb-Explosion Driven Proton Focusing in an Arched CH Target

Authors: W. Q. Wang, Y. Yin, D. B. Zou, T. P. Yu, J. M. Ouyang, F. Q. Shao

Abstract:

The high-energy-density state, i.e., matter and radiation at energy densities in excess of 10^11 J/m^3, is relevant to materials science, nuclear physics, astrophysics, and geophysics. Laser-driven particle beams are well suited to heating matter as a trigger due to their unique properties of ultrashort duration and low emittance. Compared to X-ray and electron sources, it is easier to generate uniformly heated large-volume material with proton and ion beams because of their highly localized energy deposition. With the construction of state-of-the-art high-power laser facilities, creating extreme conditions of high temperature and high density in the laboratory becomes possible. It has been demonstrated that, on a picosecond time scale, solid-density material can be isochorically heated to over 20 eV by the ultrafast proton beam generated from spherically shaped targets. For the above-mentioned technique, the proton energy density plays a crucial role in the formation of warm dense matter states. Recently, several methods have been devoted to realizing the focusing of the accelerated protons, involving externally applied static fields or specially designed targets interacting with single or multiple laser pulses. In previous works, two co-propagating or counter-propagating laser pulses were employed to strike a submicron plasma shell. However, ultra-high pulse intensities, accurate temporal synchronization, and undesirable transverse instabilities over long time scales remain intractable for current experimental implementations. Here, a mechanism for the focusing of laser-driven proton beams from two-ion-species arched targets is investigated by multi-dimensional particle-in-cell simulations. When an intense linearly polarized laser pulse impinges on the thin arched target, all electrons are completely evacuated, leading to a Coulomb-explosive electric field mostly originating from the heavier carbon ions. The lighter protons, in a reference frame moving at the ion sound speed, are accelerated and effectively focused by this radially isotropic field. At a laser intensity of 2.42×10^21 W/cm^2, a ballistic proton bunch with an energy density as high as 2.15×10^17 J/m^3 is produced, and the highest proton energy and the focusing position agree well with those from theory.

Keywords: Coulomb explosion, focusing, high-energy-density, ion acceleration

Procedia PDF Downloads 316
573 Investigation of a New Approach "AGM" to Solve Complicated Nonlinear Partial Differential Equations in All Engineering Fields and Basic Science

Authors: Mohammadreza Akbari, Pooya Soleimani Besheli, Reza Khalili, Davood Domiri Danji

Abstract:

Our aims in this contribution are accuracy, capability, and power in solving complicated nonlinear partial differential equations. The purpose is to enhance the ability to solve such nonlinear differential equations in basic science and engineering fields, and similar problems, with a simple and innovative approach. Most engineering systems behave nonlinearly in practice (especially in basic science and engineering fields), and solving these problems analytically (rather than numerically) is difficult, complex, and sometimes impossible; fluid and gas wave problems, for example, cannot be solved with numerical methods when no boundary conditions are available. Accordingly, in this symposium we present an innovative approach, which we have named Akbari-Ganji's Method (AGM), that can solve sets of coupled nonlinear differential equations (ODEs, PDEs) with high accuracy and a simple solution; this is demonstrated by comparing the achieved solutions with those of a numerical method (fourth-order Runge-Kutta). Eventually, it will be argued that AGM can bring considerable benefit to researchers, professors, and students worldwide because of its coding system: using this software, complicated linear and nonlinear partial differential equations can be solved analytically, so there is little difficulty in solving nonlinear differential equations. The advantages and abilities of the method (AGM) are as follows: (a) Nonlinear differential equations (ODE, PDE) are directly solvable by this method. (b) With AGM, most of the time equations can be solved without any dimensionless procedure, for any number of boundary or initial conditions. (c) AGM is always convergent with respect to the boundary or initial conditions. (d) Exponential, trigonometric, and logarithmic terms appearing in the nonlinear differential equation need no Taylor expansion with AGM, which yields high solution precision. (e) AGM is very flexible in its coding system and can easily solve a variety of nonlinear differential equations with acceptable accuracy. (f) One of the important advantages of this method is the analytical solution, with high accuracy, of partial differential equations such as vibration in solids and waves in water and gas, with only minimal initial and boundary conditions needed to solve the problem. (g) It is very important to present a general and simple approach for solving most problems involving differential equations with high nonlinearity in the engineering sciences, especially civil engineering, and to compare the output with a numerical method (fourth-order Runge-Kutta) and exact solutions.
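
The fourth-order Runge-Kutta scheme mentioned above serves as the numerical benchmark; a minimal sketch of that reference solver is given below (the AGM formulation itself is not reproduced here, and the nonlinear test problem is a hypothetical example, not one from the paper).

```python
import numpy as np

def rk4(f, t0, y0, t_end, h):
    """Classical fourth-order Runge-Kutta integrator for dy/dt = f(t, y)."""
    ts = [t0]
    ys = [np.asarray(y0, dtype=float)]
    t, y = t0, np.asarray(y0, dtype=float)
    while t < t_end - 1e-12:
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t = t + h
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)

# Hypothetical nonlinear test problem: y' = -y^2, y(0) = 1, exact solution y = 1/(1 + t).
ts, ys = rk4(lambda t, y: -y**2, 0.0, [1.0], 2.0, 0.01)
print(ys[-1, 0], 1 / (1 + ts[-1]))  # numerical vs. exact value at t = 2
```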

Keywords: new approach, AGM, sets of coupled nonlinear differential equation, exact solutions, numerical

Procedia PDF Downloads 439
572 A Theoretical Approach to the Tesla Pump

Authors: Cristian Sirbu-Dragomir, Stefan-Mihai Sofian, Adrian Predescu

Abstract:

This paper aims to study Tesla pumps for circulating biofluids. It is desired to make a small pump for the circulation of biofluids. This type of pump will be studied because it has the following characteristics: it has no blades, which results in very small friction; reduced friction forces; low production cost; increased adaptability to different types of fluids; low cavitation (towards zero); low shocks due to the lack of blades; rare maintenance due to low cavitation; very small turbulence in the fluid; a low number of changes in the direction of the fluid (compared to rotors with blades); increased efficiency at low powers; fast acceleration; the need for a low torque; and a lack of shocks in blades at sudden starts and stops. All these elements are necessary to be able to make a small pump that could be inserted into the thoracic cavity. The pump will be designed to combat myocardial infarction. Because the pump must be inserted in the thoracic cavity, elements such as low friction forces, shocks as low as possible, low cavitation, and as little maintenance as possible are very important. The operation should be performed once, without having to change the rotor after a certain time. Given the very small size of the pump, the blades of a classic rotor would be very thin, and sudden starts and stops could cause considerable damage or require a very expensive material. At the same time, since this is a medical procedure, low cost is important so that it remains easily accessible to the population. The lack of turbulence or vortices caused by a classic rotor is again a key element because, when it comes to blood circulation, the flow must be laminar and not turbulent. Turbulent flow can even cause a heart attack. Due to these aspects, Tesla's model could be ideal for this work. Usually, this pump is considered to reach an efficiency of 40% when used at very high powers. However, the author of this type of pump claimed that the maximum efficiency the pump can achieve is 98%. The key element that could help to achieve this efficiency, or one as close as possible to it, is that the pump will be used for low volumes and pressures. The key elements for obtaining the best efficiency in this model are the number of rotors placed in parallel and the distance between them. The distance between them must be small, which helps to obtain a pump as small as possible. The operating principle of such a rotor is to stack several parallel discs with openings cut in their centres; the space between the discs creates a suction effect, pulling the liquid through the holes in the rotor and throwing it outwards. Also, a very important element is the viscosity of the liquid, which dictates the distance between the discs needed to achieve power transfer with minimal losses.

Keywords: lubrication, temperature, tesla-pump, viscosity

Procedia PDF Downloads 165
571 The Importance of Entrepreneurship for National Economy: Evaluation of Developed and Least Developed Countries

Authors: Adnan Celik

Abstract:

Entrepreneurs are people who attempt to start a business and do not hesitate to do so. They are involved in the production of economic goods and services through the factors of production. They also find the financial resources necessary for production and the markets in which the production will be valued. After all, they create economic value. The main function of the entrepreneur in contemporary societies is to realize innovations. From this point of view, the power of the modern entrepreneur is based on the capacity to innovate and to transform innovations into tangible commercial products. In this context, the concept of an entrepreneur is used to mean the person or persons who constantly innovate. Successful entrepreneurs take on the role of a locomotive in the development of their countries. They support economic development with their activities. In addition to production and marketing activities, they also contribute significantly to employment. Along with the development of the country, they also try to make the income distribution more balanced. Entrepreneurs in developed countries, in particular, intensively perform the following functions: producing new goods and services or increasing the quantity and quality of known goods and services; developing and applying new production methods; establishing new organizations in industry; reaching new markets; and finding new sources of raw materials and similar inputs. Entrepreneurs who fully implement business functions achieve economic efficiency more easily. Thus, they provide great advantages to the business and to the national economy. Successful entrepreneurs are people who make money by creating economic value. These revenues are, on the one hand, distributed to individuals in the business as wages, premiums, or dividends and, on the other hand, used for the growth of companies. Thus employees, managers, entrepreneurs, and the whole country can benefit greatly. In the least developed countries, the guiding effect of traditional value patterns on individuals' attitudes and behaviors varies depending on the socio-economic characteristics of individuals. It is normal for an entrepreneur with a low level of education, brought up in a traditional structure, to behave in accordance with traditional value patterns. In fact, this is the primary problem of all countries in their development efforts. The solution to this problem will be possible by giving the necessary importance to the social dimension, as well as the technical dimension, of development. This study mainly focuses on the importance of entrepreneurship for the national economy. This issue has been handled separately for developed and least developed countries. As a result of the study, entrepreneurship suggestions are made, especially for least developed countries, with national economic growth and development in mind.

Keywords: entrepreneur, entrepreneurship, national economy, entrepreneurship in developed and least developed countries

Procedia PDF Downloads 120
570 A Mixed Finite Element Formulation for Functionally Graded Micro-Beam Resting on Two-Parameter Elastic Foundation

Authors: Cagri Mollamahmutoglu, Aykut Levent, Ali Mercan

Abstract:

Micro-beams are among the most common components of nano-electromechanical systems (NEMS) and micro-electromechanical systems (MEMS). For this reason, the static bending, buckling, and free vibration analysis of micro-beams has been the subject of many studies. In addition, micro-beams restrained with elastic-type foundations have been of particular interest. In the analysis of microstructures, closed-form solutions are proposed when available, but most of the time solutions are based on numerical methods due to the complex nature of the resulting differential equations. Thus, a robust and efficient solution method is of great importance. In this study, a mixed finite element formulation is obtained for a functionally graded Timoshenko micro-beam resting on a two-parameter elastic foundation. In the formulation, modified couple stress theory is utilized for the micro-scale effects. The equation of motion and the boundary conditions are derived according to Hamilton's principle. A functional, derived through a systematic procedure based on the Gateaux differential, is proposed for the bending and buckling analysis; it is equivalent to the governing equations and boundary conditions. The most important advantage of the formulation is that it allows the use of C₀-continuous shape functions; thus, shear locking is avoided in a built-in manner. Also, the element matrices are sparsely populated and can be easily calculated with closed-form integration. In this framework, results concerning the effects of the micro-scale length parameter, power-law parameter, aspect ratio, and coefficients of the partially or fully continuous elastic foundation on the static bending, buckling, and free vibration response of the FG micro-beam under various boundary conditions are presented and compared with the existing literature. The performance characteristics of the presented formulation were evaluated against other numerical methods such as the generalized differential quadrature method (GDQM). It is found that similar convergence characteristics were obtained with a lower computational burden. Moreover, the formulation also provides a direct calculation of the micro-scale-related contributions to the structural response.

Keywords: micro-beam, functionally graded materials, two-parameter elastic foundation, mixed finite element method

Procedia PDF Downloads 138
569 Site Suitability of Offshore Wind Energy: A Combination of Geographic Referenced Information and Analytic Hierarchy Process

Authors: Ayat-Allah Bouramdane

Abstract:

Power generation from offshore wind energy does not emit carbon dioxide or other air pollutants and can therefore play a role in reducing greenhouse gas emissions from the energy sector. In addition, these systems are considered more efficient than onshore wind farms, as they generate electricity from the wind blowing across the sea, thanks to the higher wind speed and greater consistency in direction due to the lack of physical interference from land or human-made objects. This means offshore installations require fewer turbines to produce the same amount of energy as onshore wind farms. However, offshore wind farms require more complex supporting infrastructure and, as a result, are more expensive to construct. In addition, higher wind speeds, strong seas, and accessibility issues make offshore wind farms more challenging to maintain. This study uses a combination of Geographic Referenced Information (GRI) and the Analytic Hierarchy Process (AHP) to identify the most suitable sites for offshore wind farm development in Morocco, with a particular focus on the city of Dakhla. A range of environmental, socio-economic, and technical criteria are taken into account to solve this complex Multi-Criteria Decision-Making (MCDM) problem. Based on experts' knowledge, a pairwise comparison matrix is constructed at each level of the hierarchy, and fourteen sub-criteria belonging to the main criteria have been weighted to generate the site suitability of offshore wind plants and to obtain in-depth knowledge of unsuitable areas and of areas with low, moderate, high, and very high suitability. We find that wind speed is the most decisive criterion in offshore wind farm development, followed by bathymetry, while proximity to facilities, sediment thickness, and the remaining parameters show much lower weightings, rendering technical parameters the most decisive in offshore wind farm development projects. We also discuss the potential of other marine renewable energy sources in Morocco, such as wave and tidal energy. The proposed approach and analysis can help decision-makers and can be applied to other countries in order to support the site selection process for offshore wind farms.
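
To make the weighting step concrete, the following sketch shows how AHP weights and a consistency check can be derived from a pairwise comparison matrix. It is a generic illustration, not the authors' matrix: the three criteria and the judgment values are hypothetical, whereas the study itself weighted fourteen sub-criteria.

```python
import numpy as np

def ahp_weights(pairwise):
    """Derive criterion weights from an AHP pairwise comparison matrix
    using the principal eigenvector, and report the consistency ratio."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                    # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                                # normalized weights
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)                   # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]   # Saaty's random index
    return w, ci / ri                              # weights, consistency ratio

# Hypothetical 3-criteria example (wind speed, bathymetry, proximity to facilities),
# judged on Saaty's 1-9 scale; the actual study used fourteen sub-criteria.
M = [[1, 3, 7],
     [1/3, 1, 5],
     [1/7, 1/5, 1]]
weights, cr = ahp_weights(M)
print(weights, cr)  # judgments are considered acceptable if CR < 0.1
```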

Keywords: analytic hierarchy process, dakhla, geographic referenced information, morocco, multi-criteria decision-making, offshore wind, site suitability

Procedia PDF Downloads 132
568 Geothermal Resources to Ensure Energy Security During Climate Change

Authors: Debasmita Misra, Arthur Nash

Abstract:

Energy security and sufficiency enable the economic development and welfare of a nation or a society. Currently, the global energy system is dominated by fossil fuels, a non-renewable energy resource, which renders energy security vulnerable. Hence, many nations have begun augmenting their energy systems with renewable energy resources, such as solar, wind, biomass, and hydro. However, with climate change, how sustainable some of these renewable energy resources will be in the future is a matter of concern. Geothermal energy resources have been underexplored or underexploited in global renewable energy production and security, although they are gaining attractiveness as a renewable energy resource. The question is whether geothermal energy resources are more sustainable than other renewable energy resources. High-temperature reservoirs (> 220 °F) can produce electricity from flash/dry steam plants as well as binary cycle production facilities. Most of the world's high-enthalpy geothermal resources lie within the seismo-tectonic belt. However, exploration for geothermal energy in conventional geothermal systems is of great importance in order to improve their economic viability. In recent years, there has been an increase in the use and development of several exploration methods for geothermal resources, such as seismic and electromagnetic methods. The thermal infrared band of Landsat can reflect land surface temperature differences, so ETM+ data with specific grey-stretch enhancement have been used to explore for underground hot water. Another way of exploring for potential power is to utilize fairway play analysis for sites without surface expression and in rift zones. Utilizing this type of analysis can improve the success rate of project development by reducing exploration costs. Identifying the basin-wide distribution of geologic factors that control the geothermal environment would help in identifying the controls on resource concentration aside from the heat flow, thus improving the probability of success. The first step is compiling existing geophysical data. This leads to constructing conceptual models of potential geothermal concentrations, which can then be used to create a geodatabase for analyzing risk maps. Geospatial analysis and other GIS tools can be used in such efforts to produce spatial distribution maps. The goal of this paper is to discuss how climate change may impact renewable energy resources and how a synthesized analysis could be developed for geothermal resources to ensure sustainable and cost-effective exploitation of the resource.

Keywords: exploration, geothermal, renewable energy, sustainable

Procedia PDF Downloads 137
567 Feminising Football and Its Fandom: The Ideological Construction of Women's Super League

Authors: Donna Woodhouse, Beth Fielding-Lloyd, Ruth Sequerra

Abstract:

This paper explores the structure and culture of the English Football Association (FA), the governing body of soccer in England, in relation to the development of the FA Women's Super League (WSL). In doing so, it examines the organisation's journey from banning the sport in 1921 to establishing the country's first semi-professional female soccer league in 2011. As the FA has a virtual monopoly on defining the structures of the elite game, we attempted to understand its behaviour in the context of broader issues of power, control, and resistance by giving voice to the experiences of those affected by its decisions. Observations were carried out at 39 matches over three years. Semi-structured interviews with 17 people involved in the women's game, identified via snowball sampling, were also carried out. Transcripts accompanied detailed field notes and were inductively coded to identify themes. What emerged was the governing body's desire to create a new product, jettisoning the long history of the women's game in order to shape and control the sport in a way it is no longer able to do with the elite male club game. The league created was also shaped by traditional conceptualisations of gender, in terms of the portrayal of its style of play and target audience, setting increased participation and spectatorship targets as measures of 'success'. The national governing body has demonstrated pseudo-inclusion and a lack of enthusiasm for the implementation of equity reforms, driven by a belief that the organisation is already representative, fair, and accessible. Despite consistent external pressure, the Football Association is still dominated at its most senior levels by males. By claiming to hold a monopoly on expertise around the sport, maintaining complex committee structures and procedures, and upholding membership rules rooted in the amateur game, it remains a deeply gendered organisation, resistant to structural and cultural change. In the WSL, the FA's structure and culture have created a franchise over which it retains almost complete control, dictating the terms and conditions of entry and marginalising alternative voices. The organisation presents a feminised version of both play and spectatorship, portraying the sport as a distinct, and lesser, version of soccer.

Keywords: football association, organisational culture, soccer, women’s super league

Procedia PDF Downloads 335
566 Disparities in Language Competence and Conflict: The Moderating Role of Cultural Intelligence in Intercultural Interactions

Authors: Catherine Peyrols Wu

Abstract:

Intercultural interactions are becoming increasingly common in organizations and in life. These interactions are often the stage for miscommunication and conflict. In management research, these problems are commonly attributed to cultural differences in values and interactional norms. As a result, the notion that intercultural competence can minimize these challenges is widely accepted. Cultural differences, however, are not the only source of challenge during intercultural interactions. The need to rely on a lingua franca, or common language between people who have different mother tongues, is another important one. In theory, a lingua franca can improve communication and ease coordination. In practice, however, disparities in people's ability and confidence to communicate in the language can exacerbate tensions and generate inefficiencies. In this study, we draw on power theory to develop a model of disparities in language competence and conflict in a multicultural work context. Specifically, we hypothesized that differences in language competence between interaction partners would be positively related to conflict, such that people would report greater conflict with partners who have more dissimilar levels of language competence and lesser conflict with partners who have more similar levels of language competence. Furthermore, we proposed that cultural intelligence (CQ), an intercultural competence that denotes an individual's capability to be effective in intercultural situations, would weaken the relationship between disparities in language competence and conflict, such that people would report less conflict with partners who have more dissimilar levels of language competence when the interaction partner has high CQ, and more conflict when the partner has low CQ. We tested this model with a sample of 135 undergraduate students working in multicultural teams for 13 weeks. We used a round-robin design to examine conflict in 646 dyads nested within 21 teams. Results of analyses using social relations modeling provided support for our hypotheses. Specifically, we found that in intercultural dyads with large disparities in language competence, partners with the lowest level of language competence reported higher levels of interpersonal conflict. However, this relationship disappeared when the partner with higher language competence was also high in CQ. These findings suggest that communication in a lingua franca can be a source of conflict in intercultural collaboration when partners differ in their level of language competence, and that CQ can alleviate these effects during collaboration with partners who have relatively lower levels of language competence. Theoretically, this study underscores the benefits of CQ as a complement to language competence for intercultural effectiveness. Practically, these results further attest to the benefits of investing resources to develop language competence and CQ in employees engaged in multicultural work.
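
As a simplified illustration of the hypothesized moderation (not the social relations model actually used in the study), the sketch below regresses dyadic conflict on the disparity in language competence, partner CQ, and their interaction; the variable names and simulated data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 646  # number of dyads reported in the abstract
df = pd.DataFrame({
    "lang_disparity": rng.uniform(0, 4, n),   # |difference in language competence|
    "partner_cq": rng.uniform(1, 7, n),       # partner's cultural intelligence
})
# Simulate the hypothesized pattern: disparity raises conflict, CQ dampens that effect.
df["conflict"] = (0.5 * df.lang_disparity
                  - 0.08 * df.lang_disparity * df.partner_cq
                  + rng.normal(0, 0.5, n))

model = smf.ols("conflict ~ lang_disparity * partner_cq", data=df).fit()
print(model.params)  # a negative interaction term reflects the buffering role of CQ
```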

Keywords: cultural intelligence, intercultural interactions, language competence, multicultural teamwork

Procedia PDF Downloads 152
565 Using Structured Analysis and Design Technique Method for Unmanned Aerial Vehicle Components

Authors: Najeh Lakhoua

Abstract:

Introduction: Scientific developments and techniques related to the systemic approach have generated several names for it: systems analysis, structural analysis. The main purpose of these reflections is to find a multi-disciplinary approach that organizes knowledge, creates a universal design language, and controls complex sets. In fact, system analysis is structured sequentially in steps: the observation of the system by various observers and in various aspects, the analysis of interactions and regulatory chains, the modeling that takes into account the evolution of the system, and the simulation and real tests carried out in order to obtain consensus. Thus, the system approach allows two types of analysis, according to the structure and the function of the system. The purpose of this paper is to present an application of system analysis to Unmanned Aerial Vehicle (UAV) components in order to represent the architecture of this system. Method: Various analysis methods have been proposed in the literature to carry out global analysis from different points of view, such as the SADT method (Structured Analysis and Design Technique) and Petri nets. The methodology adopted here to contribute to the system analysis of an Unmanned Aerial Vehicle is based on the use of SADT. In fact, we present a functional analysis, based on the SADT method, of the UAV components (body, power supply and platform, computing, sensors, actuators, software, loop principles, flight controls, and communications). Results: In this part, we present the application of the SADT method to the functional analysis of the UAV components. This SADT model is composed exclusively of actigrams. It starts with the main function 'To analyse the UAV components'. Then, this function is broken into sub-functions, and this process is developed until the last decomposition level has been reached (levels A1, A2, A3, and A4). Recall that SADT techniques are semi-formal; for the same subject, different correct models can be built without knowing with certitude which model is the right one or, at least, the best one. In fact, this kind of model allows users sufficient freedom in its construction, so the subjective factor introduces a supplementary dimension to its validation. That is why the validation step, on the whole, necessitates the confrontation of different points of view. Conclusion: In this paper, we presented an application of system analysis to Unmanned Aerial Vehicle components. This application of system analysis is based on the SADT method (Structured Analysis and Design Technique). The functional analysis demonstrated the usefulness of the SADT method and its ability to describe complex dynamic systems.

Keywords: system analysis, unmanned aerial vehicle, functional analysis, architecture

Procedia PDF Downloads 179
564 Dividend Policy in Family Controlling Firms from a Governance Perspective: Empirical Evidence in Thailand

Authors: Tanapond S.

Abstract:

Typically, most controlling firms are family firms, which are widespread and important for economic growth, particularly in the Asia-Pacific region. The unique characteristics of controlling families tend to play an important role in determining corporate policies such as dividend policy. Given the complexity of the family business phenomenon, the empirical evidence has remained unclear on how the families behind business groups influence dividend policy in Asian markets, where cross-shareholdings and pyramidal structures are prevalent. Dividend policy, as an important determinant of firm value, can also be used to examine the effect of the controlling families behind business groups on strategic decision-making from a governance perspective and in terms of agency problems. The purpose of this paper is to investigate the impact of ownership structure and concentration, which are influential internal corporate governance mechanisms in family firms, on dividend decision-making. Using panel data and constructing a unique dataset of family ownership and control through hand-collected information from the non-financial companies listed on the Stock Exchange of Thailand (SET) between 2000 and 2015, the study finds that family firms with large stakes distribute higher dividends than family firms with small stakes. Family ownership can mitigate the agency problems and the expropriation of minority investors in family firms. To provide insight into the distinction between ownership rights and control rights, this study examines specific firm characteristics, including the degree of concentration of controlling shareholders, by classifying family ownership into different categories. The results show that controlling families with a large deviation between voting rights and cash flow rights have more power and are associated with lower dividend payments. These situations become worse when the second blockholders are also families. To the best knowledge of the researcher, this study is the first to examine the association between family firms' characteristics and dividend policy from a corporate governance perspective in Thailand, an environment with weak investor protection and high ownership concentration. This research also underscores the importance of family control, especially in a context in which family business groups and pyramidal structures are prevalent. As a result, academics and policy makers can develop markets and corporate policies that mitigate agency problems.

Keywords: agency theory, dividend policy, family control, Thailand

Procedia PDF Downloads 266
563 Comparison of Traditional and Green Building Designs in Egypt: Energy Saving

Authors: Hala M. Abdel Mageed, Ahmed I. Omar, Shady H. E. Abdel Aleem

Abstract:

This paper describes in detail a commercial green building that has been designed and constructed in Marsa Matrouh, Egypt. The balance between building and a sustainable environment has been taken into consideration in the design and construction of this building. The building consists of one floor with a 3 m height and a 2810 m² area, while the envelope area is 1400 m². The building construction fulfills the natural ventilation requirements. Glass curtain walls make up about 50% of the building, and the window area is 300 m². Glazing units of 6 mm greenish-gray tinted tempered glass as the outer lite, 6 mm safety glass as the inner lite, and a 16 mm dehydrated air space are used in the building. Visible light transmission of 50%, a solar factor of 0.26, a shading coefficient of 0.67, and a thermal insulation U-value of 1.3 W/m²·K are implemented to realize the performance requirements. Optimum electrical distribution for the lighting system, air conditioning, and other electrical loads has been carried out. The power and quantity of each type of lamp in the lighting system and the energy consumption of the lighting system are investigated. The design of the air conditioning system is based on summer and winter outdoor conditions. Ventilated and air-conditioned spaces and fresh air rates are determined. Variable Refrigerant Flow (VRF) is the air conditioning system used in this building. The VRF outdoor units are located on the roof of the building and connected to indoor units through refrigerant piping. Indoor units are distributed in all building zones through ducts and air outlets to ensure efficient air distribution. The green building's energy consumption is evaluated monthly over one year and compared with the energy consumed under non-green conditions using the Hourly Analysis Program (HAP) model. The comparison results show that the total energy consumed per year in the green building is about 1,103,221 kWh, while the non-green energy consumption is about 1,692,057 kWh. In other words, the green building's total annual energy cost is reduced from $136,581 to $89,051. This means that the energy saving, and consequently the cost saving, of this green construction is about 35%. In addition, 13 points are awarded by applying one of the most popular worldwide green building certification programs (Leadership in Energy and Environmental Design, 'LEED') as a rating system for the green construction. It is concluded that this green building ensures sustainability, saves energy, and offers optimum energy performance at minimum cost.
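
As a quick check on the figures reported above, the saving can be recomputed directly from the stated annual values:

```python
green_kwh, base_kwh = 1_103_221, 1_692_057   # annual energy use, green vs. non-green
green_cost, base_cost = 89_051, 136_581      # annual energy cost in USD

energy_saving = 1 - green_kwh / base_kwh     # ≈ 0.348, i.e. about 35%
cost_saving = 1 - green_cost / base_cost     # ≈ 0.348 as well
print(f"energy saving: {energy_saving:.1%}, cost saving: {cost_saving:.1%}")
```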

Keywords: energy consumption, energy saving, green building, leadership in energy and environmental design, sustainability

Procedia PDF Downloads 285
562 Degradation and Detoxification of Tetracycline by Sono-Fenton and Ozonation

Authors: Chikang Wang, Jhongjheng Jian, Poming Huang

Abstract:

Among a wide variety of pharmaceutical compounds, tetracycline antibiotics are one of the largest groups of pharmaceuticals, extensively used in human and veterinary medicine to treat and prevent bacterial infections. Because tetracycline is water-soluble, biologically active, stable, and bio-refractory, its release to the environment threatens aquatic life and increases the risk posed by antibiotic-resistant pathogens. In practice, due to its antibacterial nature, tetracycline cannot be effectively destroyed by traditional biological methods. Hence, in this study, two advanced oxidation processes, ozonation and the sono-Fenton process, were applied individually to degrade tetracycline and to investigate their feasibility for tetracycline degradation. The effects of operational variables on tetracycline degradation, the release of nitrogen, and the change in toxicity are also reported. The initial tetracycline concentration was 50 mg/L. To evaluate the efficiency of tetracycline degradation by ozonation, ozone gas was produced by an ozone generator (Model LAB2B, Ozonia) and introduced into the reactor at different flows (25 - 500 mL/min), pH levels (pH 3 - pH 11), and reaction temperatures (15 - 55°C). In the sono-Fenton system, an ultrasonic transducer (Microson VCX 750, USA) operating at 20 kHz was combined with H₂O₂ (2 mM) and Fe²⁺ (0.2 mM), and experiments were carried out at different pH levels (pH 3 - pH 11), aeration gases and flows (air and oxygen; 0.2 - 1.0 L/min), tetracycline concentrations (10 - 200 mg/L), reaction temperatures (15 - 55°C), and ultrasonic powers (25 - 200 W). Ultrasound alone was ineffective for tetracycline degradation, with degradation efficiencies lower than 10% after 60 min of reaction. The contribution of Fe²⁺ and H₂O₂ to the degradation of tetracycline was significant: the maximum tetracycline degradation efficiency in the sono-Fenton process was as high as 91.3%, followed by 45.8% mineralization. The effect of the initial pH level on tetracycline degradation was insignificant from pH 3 to pH 6 but decreased significantly as the pH exceeded 7. Increasing the ultrasonic power slightly increased the degradation efficiency of tetracycline, which indicated that hydroxyl radicals dominated the oxidation of tetracycline. The effects of aeration with air or oxygen at different flows and of reaction temperature were insignificant. Ozonation showed better efficiencies in tetracycline degradation, with the optimum reaction condition found at pH 3, 100 mL O₃/min, and 25°C, giving 94% degradation and 60% mineralization. The toxicity of tetracycline was significantly decreased due to the mineralization of tetracycline. In addition, less than 10% of the nitrogen content was released to the solution phase as NH₃-N, and most of the degraded tetracycline could not be fully mineralized to CO₂. The results of this study indicate that both the sono-Fenton process and ozonation can effectively degrade tetracycline and reduce its toxicity under favorable conditions. The costs of the two systems need to be further investigated to understand their feasibility for tetracycline degradation.

Keywords: degradation, detoxification, mineralization, ozonation, sono-Fenton process, tetracycline

Procedia PDF Downloads 249
561 Photocatalytic Disintegration of Naphthalene and Naphthalene-Like Compounds in Indoor Air

Authors: Tobias Schnabel

Abstract:

Naphthalene and naphthalene-like compounds are a common problem in the indoor air of buildings from the 1960s and 1970s in Germany. Tar-containing roof felt was often laid under the concrete floor to prevent humidity from coming through the floor. This tar-containing roof felt has high concentrations of PAHs (polycyclic aromatic hydrocarbons) and naphthalene. Naphthalene easily evaporates and contaminates the indoor air. Especially after renovation and energy-efficiency modernization of the buildings, the naphthalene concentration rises because no forced air exchange can take place. Because of this problem, it is often necessary to replace the floors after renovation of the buildings. The MFPA Weimar (material research and testing facility) developed, in a cooperative project with LEJ GmbH and Reichmann Gebäudetechnik GmbH, a technical solution for the disintegration of naphthalene and naphthalene-like compounds in indoor air by photocatalytic reforming. Photocatalytic systems produce active oxygen species (hydroxyl radicals) by irradiating semiconductors at a wavelength corresponding to their bandgap. The light energy separates the charges in the semiconductor, producing free electrons in the conduction band and defect electrons (holes). The defect electrons can react with hydroxide ions to form hydroxyl radicals. The hydroxyl radicals produced are strong oxidizing agents and can oxidize organic matter to carbon dioxide and water. During the research, new titanium oxide catalyst surface coatings were developed. This coating technology allows the production of very porous titanium oxide layers on temperature-stable carrier materials. The porosity allows the naphthalene to be easily adsorbed by the surface coating, which accelerates the heterogeneous photocatalytic reaction. The photocatalytic reaction is induced by high-power, high-efficiency UV-A (ultraviolet) LEDs with a wavelength of 365 nm. Various tests in emission chambers and on the reformer itself show that a reduction of naphthalene at relevant concentrations between 2 and 250 µg/m³ is possible. The disintegration rate was at least 80%. To reduce the concentration of naphthalene from 30 µg/m³ to a level below 5 µg/m³ in a typical 50 m² classroom, an energy of 6 kWh is needed. The benefit of photocatalytic indoor air treatment is that every organic compound in the air can be disintegrated and reduced. The use of new photocatalytic materials in combination with highly efficient UV LEDs makes a safe and energy-efficient reduction of organic compounds in indoor air possible. At the moment, the air cleaning systems are taking the step from the prototype stage to use in real buildings.

Keywords: naphthalene, titanium dioxide, indoor air, photocatalysis

Procedia PDF Downloads 130
560 Friction and Wear Characteristics of Diamond Nanoparticles Mixed with Copper Oxide in Poly Alpha Olefin

Authors: Ankush Raina, Ankush Anand

Abstract:

Plyometric training is a form of specialised strength training that uses fast muscular contractions to improve power and speed, and it is used in sports conditioning by coaches and athletes. Despite its useful role in sports conditioning programmes, information about the effects of plyometric training on athletes' cardiovascular health, especially the electrocardiogram (ECG), has not been established in the literature. The purpose of the study was to determine the effects of lower- and upper-body plyometric training on the ECG of athletes. The study was guided by three null hypotheses. A quasi-experimental research design was adopted for the study. Seventy-two male university athletes constituted the population of the study. Thirty male athletes aged 18 to 24 years volunteered to participate, but only twenty-three completed the study. The volunteer athletes were apparently healthy, physically active, and free of any lower- and upper-extremity bone injuries for the past year, and they had no medical or orthopedic injuries that might affect their participation in the study. Ten subjects were purposively assigned to each of the three groups: lower-body plyometric training (LBPT), upper-body plyometric training (UBPT), and control (C). Training consisted of six plyometric exercises: lower-body (ankle hops, squat jumps, tuck jumps) and upper-body plyometric training (push-ups, medicine ball chest throws and side throws) at moderate intensity. The general data were collated and analysed using the Statistical Package for the Social Sciences (SPSS version 22.0). The research questions were answered using means and standard deviations, while paired-samples t-tests were used to test the hypotheses. The results revealed that athletes trained using LBPT had greater reductions in ECG parameters than those in the control group. The results also revealed that athletes trained using both LBPT and UBPT showed no significant differences from the control group in ECG parameters following ten weeks of plyometric training, except in the QRS (Q wave, R wave, and S wave) complex. Based on the findings of the study, it was recommended, among other things, that coaches should include both LBPT and UBPT as part of athletes' overall training programmes from primary to tertiary institutions to optimise performance as well as to reduce the risk of cardiovascular disease and promote a healthy lifestyle.

Keywords: boundary lubrication, copper oxide, friction, nano diamond

Procedia PDF Downloads 106
559 The Decision-Making Process of the Central Banks of Brazil and India in Regional Integration: A Comparative Analysis of MERCOSUR and SAARC (2003-2014)

Authors: Andre Sanches Siqueira Campos

Abstract:

Central banks can play a significant role in promoting regional economic and monetary integration by strengthening payment and settlement systems. However, close coordination and cooperation require facilitating the implementation of reforms at the domestic and cross-border levels in order to benchmark against international standards and commitments to the liberal order. This situation reflects the normative power of the regulatory-globalization dimension of strong states, which may drive or constrain regional integration. In the MERCOSUR and SAARC regions, central banks have set up financial initiatives that could help the South American and South Asian regions move towards convergent integration and facilitate trade and investment connectivity. This is qualitative research based on a combination of the process-tracing method with Qualitative Comparative Analysis (QCA). The research draws on multiple forms of data from central banks, regional organisations, national governments, and financial institutions, supported by the existing literature. The aim of this research is to analyze the decision-making processes of the Central Bank of Brazil (BCB) and the Reserve Bank of India (RBI) towards regional financial cooperation by identifying connectivity instruments that foster, gridlock, or redefine cooperation. The BCB and the RBI manage the monetary policy of the largest economies of those regions, which makes regional cooperation a relevant framework for understanding how they provide an effective institutional arrangement for regional organisations to achieve some of their key policy and economic objectives. The preliminary conclusion is that both the BCB and the RBI demonstrate a reluctance to deepen regional cooperation because of existing economic, political, and institutional asymmetries. Deepening regional cooperation is constrained by the interest of central banks in protecting their economies from risks of instability due to the different degrees of development between countries in their regions and the international financial crises that have hit the international system in the 21st century. Reluctant regional integration also preserves autonomy for national development and political ground for the contestation of global financial governance by Brazil and India.

Keywords: Brazil, central banks, decision-making process, global financial governance, India, MERCOSUR, connectivity, payment system, regional cooperation, SAARC

Procedia PDF Downloads 95
558 Analysis of Waterjet Propulsion System for an Amphibious Vehicle

Authors: Nafsi K. Ashraf, C. V. Vipin, V. Anantha Subramanian

Abstract:

This paper reports the design of a waterjet propulsion system for an amphibious vehicle based on the circulation distribution over the camber line for the sections of the impeller and stator. In contrast with conventional waterjet designs, the inlet duct is straight, so that water entry is parallel to and in line with the nozzle exit. The extended nozzle after the stator bowl makes the flow more axial, further improving thrust delivery. A waterjet works on the principle of volume flow rate through the system and, unlike the propeller, is an internal flow system. The major difference between the propeller and the waterjet occurs in the flow passing the actuator. Though a ducted propeller could constitute the equivalent of waterjet propulsion, in a realistic situation the nozzle area of the waterjet would be proportionately larger relative to the inlet area and propeller disc area. Moreover, the flow rate through the impeller disk is controlled by the nozzle area. For these reasons, the waterjet design is based on pump systems rather than propellers, and it is therefore important to bring out the characteristics of the flow from this point of view. The analysis is carried out using computational fluid dynamics. The design of the waterjet propulsion is carried out by adapting axial flow pump design, and the performance analysis was done with a three-dimensional computational fluid dynamics (CFD) code. With the varying environmental conditions, the necessity of high discharge and low head, and the space confinement of the given amphibious vehicle, an axial pump design is suitable. The major problem of the inlet velocity distribution is the large variation of velocity in the circumferential direction, which gives rise to heavy blade loading that varies with time. The cavitation criteria have also been taken into account as per hydrodynamic pump design practice. Generally, a waterjet propulsion system can be divided into the inlet, the pump, the nozzle, and the steering device. The pump further comprises an impeller and a stator. Analytical and numerical approaches, such as a RANSE solver, have been undertaken to understand the performance of the designed waterjet propulsion system. Unlike in the case of propellers, the analysis was based on the head-flow curve together with efficiency and power curves. The modeling of the impeller is performed using a rigid body motion approach. The realizable k-ϵ model has been used for turbulence modeling. Appropriate boundary conditions are applied to the domain, and domain-size and grid-dependence studies are carried out.
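
As background to the head-flow, efficiency, and power curves referred to above, the following minimal sketch applies the standard pump relations; it is not the study's CFD post-processing, and the flow rate, head, and shaft power values are hypothetical.

```python
RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def hydraulic_power(q, h):
    """Hydraulic power delivered to the fluid, in W (q in m^3/s, h in m of head)."""
    return RHO * G * q * h

def pump_efficiency(q, h, shaft_power):
    """Overall pump efficiency from flow rate, head, and shaft power."""
    return hydraulic_power(q, h) / shaft_power

# Hypothetical operating point: high discharge, low head, as suited to an axial pump.
q, h, p_shaft = 0.35, 4.0, 20_000.0   # m^3/s, m, W
print(hydraulic_power(q, h))          # ≈ 13.7 kW of useful hydraulic power
print(pump_efficiency(q, h, p_shaft)) # ≈ 0.69
```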

Keywords: amphibious vehicle, CFD, impeller design, waterjet propulsion

Procedia PDF Downloads 204
557 Antimicrobial and Antioxidant Activities of Actinobacteria Isolated from the Pollen of Pinus sylvestris Grown on the Lake Baikal Shore

Authors: Denis V. Axenov-Gribanov, Irina V. Voytsekhovskaya, Evgenii S. Protasov, Maxim A. Timofeyev

Abstract:

Isolated ecosystems existing under specific environmental conditions have been shown to be promising sources of new strains of actinobacteria. The taiga forest of Baikal Siberia has not been well studied, and its actinobacterial population remains uncharacterized. The proximity between the huge water mass of Lake Baikal and the high mountain ranges influences the structure and diversity of the plant world in Siberia. Here, we report the isolation of eighteen actinobacterial strains from the male cones of Pinus sylvestris trees growing on the shore of the ancient Lake Baikal in Siberia. The actinobacterial strains were isolated on solid nutrient MS medium and Czapek agar supplemented with cycloheximide and phosphomycin. Identification of the actinobacteria was carried out by 16S rRNA gene sequencing and further analysis of their evolutionary history. Four different liquid and solid media (NL19, DNPM, SG, and ISP) were tested for metabolite production. The metabolite extracts produced by the isolated strains were tested for antibacterial and antifungal activities. The antiradical activity of the crude extracts was also assessed. The strain Streptomyces sp. IB 2014 I 74-3, which was active against Gram-negative bacteria, was selected for dereplication analysis using high-performance liquid chromatography with mass spectrometry. Mass detection was performed in both positive and negative modes, with the detection range set to 160-2500 m/z. Data were collected and analyzed using Bruker Compass Data Analysis software, version 4.1. Dereplication was performed using the Dictionary of Natural Products (DNP) database version 6.1 with the following search parameters: accurate molecular mass, absorption spectra, and source of compound isolation. In addition to the more common representative strains of Streptomyces, several species belonging to the genera Rhodococcus, Amycolatopsis, and Micromonospora were isolated. Several of the selected strains were deposited in the Russian Collection of Agricultural Microorganisms (RCAM), St. Petersburg, Russia. All isolated strains exhibited antibacterial and antifungal activities. We identified several strains that inhibited the growth of the pathogen Candida albicans but did not hinder the growth of Saccharomyces cerevisiae. Several isolates were active against Gram-positive and Gram-negative bacteria. Moreover, extracts of several strains demonstrated high antioxidant activity. The high proportion of biologically active strains producing antibacterial and specific antifungal compounds may reflect their role in protecting pollen against phytopathogens. Dereplication of the secondary metabolites of strain Streptomyces sp. IB 2014 I 74-3 revealed a total of 59 major compounds in the culture liquid extract of the strain cultivated in ISP medium. Eight compounds were preliminarily identified based on characteristics described in the Dictionary of Natural Products database. Using the stated search parameters, Streptomyces sp. IB 2014 I 74-3 was found to produce saframycin A, Y3, and S; 2-amino-3-oxo-3H-phenoxazine-1,8-dicarboxylic acid; galtamycinone; platencin A4-13R and A4-4S; ganefromycin d1; the antibiotic SS 8201B; and streptothricin D, 40-decarbamoyl, 60-carbamoyl. Moreover, forty-nine of the 59 compounds detected in the extract examined in the present study did not result in any positive hits when searched within the DNP database and could not be identified based on the available mass-spectrometric data. Thus, these compounds might represent new findings.

Keywords: actinobacteria, Baikal Lake, biodiversity, male cones, Pinus sylvestris

Procedia PDF Downloads 211
556 Modeling and Optimizing of Sinker Electric Discharge Machine Process Parameters on AISI 4140 Alloy Steel by Central Composite Rotatable Design Method

Authors: J. Satya Eswari, J. Sekhar Babub, Meena Murmu, Govardhan Bhat

Abstract:

Electrical discharge machining (EDM) is an unconventional manufacturing process based on the removal of material from a part by means of a series of repeated electrical sparks, created by electric pulse generators at short intervals between an electrode tool and the part to be machined, immersed in a dielectric fluid. In this paper, a study is performed on the influence of the factors of peak current, pulse-on time, interval time, and power supply voltage. The output responses measured were material removal rate (MRR) and surface roughness. Finally, the parameters were optimized for maximum MRR with the desired surface roughness. Response surface methodology (RSM) involves establishing mathematical relations between the design variables and the resulting responses and optimizing the process conditions. RSM is not free from problems when it is applied to multi-factor and multi-response situations. A design of experiments (DOE) technique is used to select the optimum machining conditions for machining AISI 4140 by EDM. The purpose of this paper is to determine the optimal factors of the electro-discharge machining (EDM) process and to investigate the feasibility of design of experiments techniques. The workpieces used were rectangular plates of AISI 4140 grade steel alloy. The study of the optimized settings of key machining factors, such as pulse-on time, gap voltage, flushing pressure, input current, and duty cycle, on material removal and surface roughness was carried out using central composite design. The objective is to maximize the material removal rate (MRR). The central composite design data are used to develop second-order polynomial models with interaction terms. Insignificant coefficients are eliminated from these models using the Student's t-test, and the F-test is applied to check goodness of fit. CCD is first used to determine the optimal factors of the electro-discharge machining (EDM) process for maximizing the MRR. The responses are further treated through an objective function to establish the same set of key machining factors that satisfy the optimization problem of the electro-discharge machining (EDM) process. The results demonstrate the better performance of CCD-based RSM for optimizing the electro-discharge machining (EDM) process.
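
As an illustration of the modeling step, a full second-order polynomial with interaction terms can be fitted to central composite design data by ordinary least squares; the sketch below uses two coded factors and synthetic response values, not the experimental data of the study:

```python
# Sketch: fit a full quadratic (second-order) response surface for MRR as a function
# of two coded factors (e.g., pulse-on time and peak current). Data are synthetic.
import numpy as np

# Coded factor settings (x1, x2) of a central composite rotatable design (alpha = 1.414)
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1.414, 0], [1.414, 0], [0, -1.414], [0, 1.414],
              [0, 0], [0, 0], [0, 0]])
# Synthetic MRR responses at those design points
y = np.array([2.1, 3.4, 2.6, 4.9, 1.9, 4.2, 2.0, 3.1, 3.6, 3.5, 3.7])

x1, x2 = X[:, 0], X[:, 1]
# Design matrix: intercept, linear, interaction, and pure quadratic terms
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# Ordinary least squares estimate of the model coefficients
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
for name, c in zip(["b0", "b1", "b2", "b12", "b11", "b22"], coef):
    print(f"{name} = {c: .3f}")

# Predicted MRR at the design centre point
print("predicted MRR at centre:", A[-1] @ coef)
```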

Keywords: electric discharge machining (EDM), modeling, optimization, CCRD

Procedia PDF Downloads 326
555 An Anthropometric Index Capable of Differentiating Morbid Obesity from Obesity and Metabolic Syndrome in Children

Authors: Mustafa Metin Donma

Abstract:

Circumference measurements are important because they are easily obtained values for the identification of weight gain without determining body fat. They may give meaningful information about the varying stages of obesity. Besides, some formulas may be derived from a number of body circumference measurements to estimate body fat. Waist (WC), hip (HC) and neck (NC) circumferences are currently the most frequently used measurements. The aim of this study was to develop a formula derived from these three anthropometric measurements, each giving valuable information independently, to question whether their combined power within a formula was capable of being helpful for the differential diagnosis of morbid obesity without metabolic syndrome (MetS) from MetS. One hundred and eighty-seven children were recruited from the pediatrics outpatient clinic of Tekirdag Namik Kemal University Faculty of Medicine. The parents of the participants were informed about the study and asked to fill in and sign the consent forms. The study was carried out according to the Helsinki Declaration. The study protocol was approved by the institutional non-interventional ethics committee. The study population was divided into four groups, normal-body mass index (N-BMI), obese (OB), morbid obese (MO) and MetS, which were composed of 35, 44, 75 and 33 children, respectively. Age- and gender-adjusted BMI percentile values were used for the classification of groups. The children in the MetS group were selected based upon the nature of the MetS components described as MetS criteria. Anthropometric measurements, laboratory analyses and statistical evaluation confined to the study population were performed. Body mass index values were calculated. A circumference index, the advanced Donma circumference index (ADCI), was introduced as WC*HC/NC. Statistical significance was defined as a p value smaller than 0.05. Body mass index values were 17.7±2.8, 24.5±3.3, 28.8±5.7 and 31.4±8.0 kg/m² for the N-BMI, OB, MO and MetS groups, respectively. The corresponding values for ADCI were 165±35, 240±42, 270±55, and 298±62. Significant differences were obtained between the BMI values of the N-BMI group and the OB, MO and MetS groups (p=0.001). Obese group BMI values also differed from MO group BMI values (p=0.001). However, the increase in the MetS group compared to the MO group was not significant (p=0.091). For the new index, significant differences were obtained between the N-BMI group and the OB, MO and MetS groups (p=0.001). Obese group ADCI values also differed from MO group ADCI values (p=0.015). A significant difference between the MO and MetS groups was detected (p=0.043). Upon consideration of all participants, the correlation between BMI and ADCI was r=0.0883 with p=0.001. In conclusion, in spite of the strong correlation between BMI and ADCI values obtained when all groups were considered, ADCI, but not BMI, was the index capable of differentiating cases with morbid obesity from cases with morbid obesity and MetS.
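
Since the index is a simple ratio of circumferences, a minimal sketch of how ADCI (WC*HC/NC) and BMI would be computed for a single child is shown below; the measurement values are hypothetical, not study data:

```python
# Sketch: compute BMI and the advanced Donma circumference index (ADCI = WC * HC / NC)
# for one child. The measurement values below are hypothetical examples.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m^2."""
    return weight_kg / height_m**2

def adci(waist_cm: float, hip_cm: float, neck_cm: float) -> float:
    """Advanced Donma circumference index: waist * hip / neck (all in cm)."""
    return waist_cm * hip_cm / neck_cm

child = {"weight_kg": 52.0, "height_m": 1.45,
         "waist_cm": 78.0, "hip_cm": 92.0, "neck_cm": 31.0}

print(f"BMI  = {bmi(child['weight_kg'], child['height_m']):.1f} kg/m^2")
print(f"ADCI = {adci(child['waist_cm'], child['hip_cm'], child['neck_cm']):.0f}")
```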

Keywords: anthropometry, body mass index, child, circumference, metabolic syndrome, obesity

Procedia PDF Downloads 52
554 Comparative Analysis of the Antioxidant Capacities of Pre-Germinated and Germinated Pigmented Rice (Oryza sativa L. Cv. Superjami and Superhongmi)

Authors: Soo Im Chung, Lara Marie Pangan Lo, Yao Cheng Zhang, Su Jin Nam, Xingyue Jin, Mi Young Kang

Abstract:

Rice (Oryza sativa L.) is one of the most widely consumed grains. Due to the growing demand for rice as a potential functional food and nutraceutical source, and the increasing awareness of people towards a healthy diet and good quality of living, more research has dwelt upon the development of new rice cultivars for population consumption. However, studies on the antioxidant capacities of newly developed rice cultivars, as well as on the effects of germination in these cultivars, are limited. Therefore, this study focused on the analysis of the antioxidant potential of pre-germinated and germinated pigmented rice cultivars in South Korea, namely the purple cultivar Superjami (SJ) and the red cultivar Superhongmi (SH), in comparison with non-pigmented normal brown (NB) rice. The powdered rice grain samples were extracted with 80% methanol and their antioxidant activities were determined. The results showed that the pre-germinated pigmented rice cultivars have higher Fe2+ chelating ability (Fe2+), reducing power (RP), 2,2´-azinobis[3-ethylbenzthiazoline]-6-sulfonic acid (ABTS) radical scavenging and superoxide dismutase (SOD) activities than the control NB rice. Moreover, the germination process induced a significant increase in the antioxidant activities of all the rice samples regardless of their strains. The purple rice SJ showed greater Fe2+ (88.82 ± 0.53%), RP (0.82 ± 0.01), ABTS (143.63 ± 2.38 mg VCEAC/100 g) and SOD (59.31 ± 0.48%) activities than the red grain SH, with the control NB having the lowest antioxidant potential among the three rice samples examined. The effective concentration at 50% (EC50) of the 1,1-diphenyl-2-picrylhydrazyl (DPPH) and hydroxyl radical (•OH) scavenging activities was also obtained for the rice samples. SJ showed lower EC50 values for its DPPH (3.81 ± 0.15 mg/mL) and •OH (5.19 ± 0.08 mg/mL) radical scavenging activities than the red grain SH and the control NB rice, indicating that even at lower concentrations it can readily exhibit antioxidant effects against reactive oxygen species (ROS). These results clearly suggest the higher antioxidant potential of pigmented rice varieties as compared with the widely consumed NB rice. The study also reveals that even at lower concentrations, pigmented rice varieties can exhibit their antioxidant activities, and that the germination process further enhances the antioxidant capacities of the rice samples regardless of their types. With these results at hand, these new rice varieties can be further developed as a good source of biofunctional elements that can help alleviate the growing number of cases of metabolic disorders.
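
The EC50 values reported for the radical scavenging assays can be estimated from a dose-response series by interpolating the concentration at 50% inhibition; the sketch below does this with synthetic data points, not the measured values of the study:

```python
# Sketch: estimate EC50 (concentration giving 50% radical scavenging) by linear
# interpolation on a measured dose-response curve. Data points are synthetic.
import numpy as np

conc = np.array([1.0, 2.0, 4.0, 6.0, 8.0])              # extract concentration, mg/mL
inhibition = np.array([18.0, 31.0, 52.0, 68.0, 79.0])   # % DPPH scavenging at each dose

# np.interp needs monotonically increasing x-values; inhibition increases with dose
# here, so concentration can be interpolated directly as a function of inhibition.
ec50 = np.interp(50.0, inhibition, conc)
print(f"EC50 ~ {ec50:.2f} mg/mL")
```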

Keywords: antioxidant capacity, germinated rice, pigmented rice, super hongmi, superjami

Procedia PDF Downloads 425
553 Foundations for Global Interactions: The Theoretical Underpinnings of Understanding Others

Authors: Randall E. Osborne

Abstract:

In a course on International Psychology, eight theoretical perspectives (Critical Psychology, Liberation Psychology, Post-Modernism, Social Constructivism, Social Identity Theory, Social Reduction Theory, Symbolic Interactionism, and Vygotsky’s Sociocultural Theory) are used as a framework for getting students to understand the concept of, and need for, globalization. One of critical psychology's main criticisms of conventional psychology is that it fails to consider, or deliberately ignores, the way power differences between social classes and groups can impact the mental and physical well-being of individuals or groups of people. Liberation psychology, also known as liberation social psychology or psicología social de la liberación, is an approach to psychological science that aims to understand the psychology of oppressed and impoverished communities by addressing the oppressive sociopolitical structure in which they exist. Postmodernism is largely a reaction to the assumed certainty of scientific, or objective, efforts to explain reality. It stems from a recognition that reality is not simply mirrored in human understanding of it but rather is constructed as the mind tries to understand its own particular and personal reality. Lev Vygotsky argued that all cognitive functions originate in, and must therefore be explained as products of, social interactions, and that learning is not simply the assimilation and accommodation of new knowledge by learners. Social Identity Theory discusses the implications of social identity for human interactions with, and assumptions about, other people. It suggests that people: (1) categorize, finding it helpful (humans might be perceived as having a need) to place people and objects into categories; (2) identify, aligning themselves with groups and gaining identity and self-esteem from them; and (3) compare, comparing themselves to others. Social reductionism argues that all behavior and experiences can be explained simply by the effect of groups on the individual. Symbolic interaction theory focuses attention on the way that people interact through symbols: words, gestures, rules, and roles. Meaning evolves from people's interactions with their environment and with one another. Vygotsky’s sociocultural theory of human learning describes learning as a social process and locates the origin of human intelligence in society or culture. The major theme of Vygotsky’s theoretical framework is that social interaction plays a fundamental role in the development of cognition. This presentation will discuss how these theoretical perspectives are incorporated into a course on International Psychology, a course on the Politics of Hate, and a course on the Psychology of Prejudice, Discrimination and Hate to promote student thinking in a more ‘global’ manner.

Keywords: globalization, international psychology, society and culture, teaching interculturally

Procedia PDF Downloads 232
552 A Novel Harmonic Compensation Algorithm for High Speed Drives

Authors: Lakdar Sadi-Haddad

Abstract:

Over the past few years, the study of very high-speed electrical drives has seen a resurgence of interest. An inventory of the scientific papers and patents dealing with the subject confirms its relevance. In fact, the democratization of magnetic bearing technology is at the origin of recent developments in high-speed applications. The main advantage of these machines is a much higher power density than the state of the art. Nevertheless, particular attention should be paid to the design of the inverter as well as to control and command. The surface-mounted permanent magnet synchronous machine is the most appropriate technology to address high-speed issues. However, it has the drawback of using a carbon sleeve to retain the magnets, which could otherwise tear because of the centrifugal forces generated at the rotor periphery. Carbon fiber is well known for its mechanical properties, but it has poor heat conduction. This results in very poor evacuation of the eddy current losses induced in the magnets by time and space stator harmonics. The three-phase inverter is the main harmonic source causing eddy currents in the magnets. In high-speed applications such harmonics are harmful because, on the one hand, the characteristic impedance is very low and, on the other hand, the ratio between the switching frequency and that of the fundamental is much lower than in the state of the art. To minimize the impact of these harmonics, a first lever is to use a modulation strategy producing low harmonic distortion, while the second is to introduce a sine filter between the inverter and the machine to smooth the voltage and current waveforms applied to the machine. Nevertheless, in very high-speed machines the interaction of the processes mentioned above may introduce particular harmonics that can irreversibly damage the system: harmonics at the resonant frequency, harmonics at the shaft mode frequency, subharmonics, etc. Some studies address these issues but treat these phenomena with separate solutions (specific modulation strategies, active damping methods, etc.). The purpose of this paper is to present a complete new active harmonic compensation algorithm, based on an improvement of the standard vector control, as a global solution to all these issues. The presentation is based on a complete theoretical analysis of the processes leading to the generation of such undesired harmonics. Then a state of the art of available solutions is provided before developing the content of the new active harmonic compensation algorithm. The study is completed by a validation using simulations and a practical case on a high-speed machine.
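
The abstract does not detail the proposed algorithm; as background, a minimal sketch of one classical building block for selective harmonic compensation, synchronous-frame extraction of a chosen harmonic followed by integral action, is given below. This is a generic textbook scheme under assumed parameter values, not the algorithm proposed by the author:

```python
# Sketch of a classical selective harmonic compensation building block:
# extract the h-th current harmonic in a reference frame rotating at h times the
# fundamental angle, low-pass filter it, and drive it to zero with integrators.
# Gains, filter constant, and the demo signal are hypothetical illustration values.
import numpy as np

def compensate_harmonic(i_alpha, i_beta, theta, h, dt, ki=50.0, tau=0.01):
    """Return alpha/beta voltage references aimed at cancelling the h-th harmonic."""
    x_d = x_q = 0.0          # low-pass filter states (harmonic-frame currents)
    u_d = u_q = 0.0          # integrator states (compensating voltages)
    v_alpha, v_beta = [], []
    for ia, ib, th in zip(i_alpha, i_beta, theta):
        c, s = np.cos(h * th), np.sin(h * th)
        # Park transform at h*theta: the targeted harmonic appears as a DC component
        id_h = c * ia + s * ib
        iq_h = -s * ia + c * ib
        # First-order low-pass filter isolates that DC component
        x_d += dt / tau * (id_h - x_d)
        x_q += dt / tau * (iq_h - x_q)
        # Integral action drives the extracted harmonic current to zero
        u_d += -ki * x_d * dt
        u_q += -ki * x_q * dt
        # Inverse transform back to the stationary frame
        v_alpha.append(c * u_d - s * u_q)
        v_beta.append(s * u_d + c * u_q)
    return np.array(v_alpha), np.array(v_beta)

if __name__ == "__main__":
    t = np.arange(0, 0.1, 1e-4)
    theta = 2 * np.pi * 50 * t                      # 50 Hz fundamental angle
    ia = np.cos(theta) + 0.2 * np.cos(5 * theta)    # fundamental plus a 5th harmonic
    ib = np.sin(theta) + 0.2 * np.sin(5 * theta)
    va, vb = compensate_harmonic(ia, ib, theta, h=5, dt=1e-4)
    print("final compensating voltage sample:", va[-1], vb[-1])
```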

Keywords: active harmonic compensation, eddy current losses, high speed machine

Procedia PDF Downloads 380
551 SAFECARE: Integrated Cyber-Physical Security Solution for Healthcare Critical Infrastructure

Authors: Francesco Lubrano, Fabrizio Bertone, Federico Stirano

Abstract:

Modern societies strongly depend on Critical Infrastructures (CI). Hospitals, power supplies, water supplies, and telecommunications are just a few examples of CIs that provide vital functions to societies. CIs like hospitals are very complex environments, characterized by a huge number of cyber and physical systems that are becoming increasingly integrated. Ensuring a high level of security within such critical infrastructures requires a deep knowledge of vulnerabilities, threats, and potential attacks that may occur, as well as defence, prevention, and mitigation strategies. The possibility to remotely monitor and control almost everything is pushing the adoption of network-connected devices. This implicitly introduces new threats and potential vulnerabilities, posing a risk, especially to those devices connected to the Internet. Modern medical devices used in hospitals are no exception and are increasingly being connected to enhance their functionalities and ease their management. Moreover, hospitals are environments with high flows of people, which are difficult to monitor and which can fairly easily gain access to the same places used by the staff, potentially causing damage. It is therefore clear that physical and cyber threats should be considered, analysed, and treated together as cyber-physical threats. This means that an integrated approach is required. SAFECARE, an integrated cyber-physical security solution, tries to respond to the presented issues within healthcare infrastructures. The challenge is to bring together the most advanced technologies from the physical and cyber security spheres, to achieve a global optimum for systemic security and for the management of combined cyber and physical threats and incidents and their interconnections. Moreover, potential impacts and cascading effects are evaluated through impact propagation models that rely on modular ontologies and a rule-based engine. Indeed, the SAFECARE architecture foresees: i) a macroblock related to the cyber security field, where innovative tools are deployed to monitor network traffic, systems, and medical devices; ii) a physical security macroblock, where video management systems are coupled with access control management, building management systems, and innovative AI algorithms to detect behavioural anomalies; iii) an integration system that collects all incoming incidents, simulates their potential cascading effects, and provides alerts and updated information regarding asset availability.
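
The cascading-effect evaluation can be illustrated as rule-based propagation over an asset dependency graph; the sketch below uses hypothetical assets and dependency rules, not the SAFECARE ontologies or rule engine:

```python
# Minimal sketch of rule-based impact propagation over an asset dependency graph.
# Assets and dependency edges are hypothetical examples for illustration only.
from collections import deque

# asset -> list of assets that depend on it (an incident propagates along these edges)
dependents = {
    "power_supply": ["hospital_network", "operating_room"],
    "hospital_network": ["patient_records_db", "imaging_pacs"],
    "patient_records_db": ["emergency_department"],
}

def propagate(initial_incident: str) -> dict:
    """Breadth-first propagation returning each impacted asset and its distance
    (number of propagation steps) from the initial incident."""
    impacted = {initial_incident: 0}
    queue = deque([initial_incident])
    while queue:
        asset = queue.popleft()
        for dep in dependents.get(asset, []):
            if dep not in impacted:            # each asset is impacted at most once
                impacted[dep] = impacted[asset] + 1
                queue.append(dep)
    return impacted

print(propagate("power_supply"))
```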

Keywords: cyber security, defence strategies, impact propagation, integrated security, physical security

Procedia PDF Downloads 148
550 Digimesh Wireless Sensor Network-Based Real-Time Monitoring of ECG Signal

Authors: Sahraoui Halima, Dahani Ameur, Tigrine Abedelkader

Abstract:

DigiMesh technology represents a pioneering advancement in wireless networking, offering cost-effective and energy-efficient capabilities. Its inherent simplicity and adaptability facilitate the seamless transfer of data between network nodes, extending the range and ensuring robust connectivity through autonomous self-healing mechanisms. In light of these advantages, this study introduces a medical platform built on DigiMesh wireless network technology, characterized by low power consumption, immunity to interference, and user-friendly operation. The primary application of this platform is the real-time, long-distance monitoring of electrocardiogram (ECG) signals, with the added capacity for simultaneous monitoring of ECG signals from multiple patients. The experimental setup comprises key components such as a Raspberry Pi, an E-Health Sensor Shield, and XBee DigiMesh modules. The platform is composed of multiple ECG acquisition devices, labeled Sensor Node 1 and Sensor Node 2, with a Raspberry Pi serving as the central hub (sink node). Two communication approaches are proposed: single-hop and multi-hop. In the single-hop approach, ECG signals are directly transmitted from a sensor node to the sink node through the XBee3 DigiMesh RF module, establishing peer-to-peer connections. This approach was tested in the first experiment to assess the feasibility of deploying wireless sensor networks (WSNs). In the multi-hop approach, two sensor nodes communicate with the server (sink node) in a star configuration. This setup was tested in the second experiment. The primary objective of this research is to evaluate the performance of both the single-hop and multi-hop approaches in diverse scenarios, including open areas and obstructed environments. Experimental results indicate the DigiMesh network's effectiveness in single-hop mode, with reliable communication over distances of approximately 300 meters in open areas. In the multi-hop configuration, the network demonstrated robust performance across approximately three floors, even in the presence of obstacles, without the need for additional router devices. This study offers valuable insights into the capabilities of DigiMesh wireless technology for real-time ECG monitoring in healthcare applications, demonstrating its potential for use in diverse medical scenarios.
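
On the sink node, frames arriving from the coordinator XBee module can be read from the serial port and dispatched per sensor node. The sketch below uses pyserial with a hypothetical port name and a simple comma-separated line format; both are assumptions for illustration, not the frame format used by the authors:

```python
# Sketch: read ECG samples forwarded by XBee DigiMesh modules on the sink node
# (e.g., a Raspberry Pi). The serial port name and the "node_id,timestamp_ms,sample"
# line format are assumptions for illustration only.
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # hypothetical serial port of the coordinator XBee
BAUD = 9600

def run(max_lines: int = 100) -> None:
    readings = {}  # node_id -> list of (timestamp_ms, ecg_sample)
    with serial.Serial(PORT, BAUD, timeout=1) as link:
        for _ in range(max_lines):
            line = link.readline().decode(errors="ignore").strip()
            if not line:
                continue   # read timed out with no data
            try:
                node_id, ts, sample = line.split(",")
                readings.setdefault(node_id, []).append((int(ts), float(sample)))
            except ValueError:
                continue   # skip malformed frames
    for node, samples in readings.items():
        print(f"{node}: {len(samples)} samples received")

if __name__ == "__main__":
    run()
```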

Keywords: DigiMesh protocol, ECG signal, real-time monitoring, medical platform

Procedia PDF Downloads 58
549 EcoTeka, an Open-Source Software for Urban Ecosystem Restoration through Technology

Authors: Manon Frédout, Laëtitia Bucari, Mathias Aloui, Gaëtan Duhamel, Olivier Rovellotti, Javier Blanco

Abstract:

Ecosystems must be resilient to ensure cleaner air, better water and soil quality, and thus healthier citizens. Technology can be an excellent tool to support urban ecosystem restoration projects, especially when based on open source and promoting open data. This is the goal of the ecoTeka application: a single digital tool for tree management that allows decision-makers to improve their urban forestry practices, enabling more responsible urban planning and climate change adaptation. EcoTeka provides city councils with three main functionalities tackling three of their challenges: easier biodiversity inventories, better green space management, and more efficient planning. To answer the cities’ need for reliable tree inventories, the application was first built with open data from the websites OpenStreetMap and OpenTrees, but it will soon also include the possibility of creating new data. To achieve this, a multi-source algorithm will be developed, based on the existing artificial intelligence Deep Forest, integrating open-source satellite images, 3D representations from LiDAR, and street views from Mapillary. This data processing will make it possible to identify each individual tree's position, height, crown diameter, and taxonomic genus. To support urban forestry management, ecoTeka offers a dashboard for monitoring the city’s tree inventory and triggers alerts about upcoming interventions. This tool was co-constructed with the green space departments of the French cities of Alès, Marseille, and Rouen. The third functionality of the application is a decision-making tool for urban planning, promoting biodiversity and landscape connectivity metrics to drive the ecosystem restoration roadmap. Based on landscape graph theory, we are currently experimenting with new methodological approaches to scale down regional ecological connectivity principles to local biodiversity conservation and urban planning policies. This methodological framework will couple a graph-theoretic approach with biological data, mainly biodiversity occurrence (presence/absence) data available on international (e.g., GBIF), national (e.g., Système d’Information Nature et Paysage) and local (e.g., Atlas de la Biodiversité Communale) biodiversity data sharing platforms, in order to help reason about new decisions for the conservation and restoration of ecological networks in urban areas. An experiment on this subject is currently ongoing with Montpellier Mediterranee Metropole. These projects and studies have shown that only 26% of tree inventory data is currently geo-localized in France; the rest is still being kept on paper or in Excel sheets. It seems that technology is not yet used enough to enrich the knowledge city councils have about biodiversity in their city, and that existing biodiversity open data (e.g., occurrence, telemetry, or genetic data), species distribution models, and landscape graph connectivity metrics are still underexploited for making rational decisions in landscape and urban planning projects. This is the goal of ecoTeka: to support easier inventories of urban biodiversity and better management of urban spaces through rational planning and decisions relying on open databases. Future studies and projects will focus on the development of tools for reducing soil artificialization, selecting plant species adapted to climate change, and highlighting the need for ecosystem and biodiversity services in cities.
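
A graph-theoretic view of landscape connectivity of the kind described here can be sketched with networkx; the habitat patches, link distances, and dispersal threshold below are hypothetical illustration values, not ecoTeka data or the exact metrics it computes:

```python
# Sketch: build a landscape graph of habitat patches and compute simple connectivity
# indicators (number of components, size of the largest one). Patches, distances,
# and the dispersal threshold are hypothetical illustration values.
import networkx as nx

# (patch_a, patch_b, distance_in_metres) between candidate green spaces
candidate_links = [
    ("park_A", "park_B", 120.0),
    ("park_B", "riverbank", 340.0),
    ("riverbank", "cemetery", 90.0),
    ("park_A", "cemetery", 800.0),
]
DISPERSAL_THRESHOLD = 400.0  # species-dependent maximum crossing distance (m)

G = nx.Graph()
G.add_nodes_from({a for a, b, d in candidate_links} | {b for a, b, d in candidate_links})
# Keep only the links that the target species can realistically cross
G.add_edges_from((a, b, {"dist": d}) for a, b, d in candidate_links
                 if d <= DISPERSAL_THRESHOLD)

components = list(nx.connected_components(G))
print("habitat components:", components)
print("largest connected cluster size:", max(len(c) for c in components))
```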

Keywords: digital software, ecological design of urban landscapes, sustainable urban development, urban ecological corridor, urban forestry, urban planning

Procedia PDF Downloads 54
548 The Ideal Memory Substitute for Computer Memory Hierarchy

Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye

Abstract:

Computer system components such as the CPU, the controllers, and the operating system work together as a team, and storage or memory is the essential part of this team apart from the processor. The memory and storage system, including processor caches, main memory, and storage, forms the basic storage component of a computer system. The characteristics of the different types of storage are inherent in the design and the technology employed in the manufacturing. These characteristics define the speed, compatibility, cost, volatility, and density of the various storage types. Most computers rely on a hierarchy of storage devices for performance. The effective and efficient use of the memory hierarchy of the computer system is therefore the single most important aspect of computer system design and use. The memory hierarchy is becoming a fundamental performance and energy bottleneck, due to the widening gap between the increasing demands of modern computer applications and the limited performance and energy efficiency provided by traditional memory technologies. With the dramatic developments in computer systems, computer storage has had a difficult time keeping up with processor speed. Computer architects are therefore facing constant challenges in developing high-speed, high-performance computer storage that is energy-efficient, cost-effective, and reliable, to intercept processor requests. It is very clear that substantial advancements in redesigning the existing physical and logical memory structures to meet the potential of the latest processors are crucial. This research work investigates the importance of the computer memory (storage) hierarchy in the design of computer systems. The constituent storage types of today's hierarchy were investigated, looking at the design technologies and how these technologies affect the memory characteristics: speed, density, stability, and cost. The investigation considered how these characteristics could best be harnessed for the overall efficiency of the computer system. The research revealed that the best single type of storage, which we refer to as the ideal memory, is a logically single physical memory that would combine the best attributes of each memory type making up the memory hierarchy. It is a single memory with an access speed as high as that found in CPU registers, combined with the highest storage capacity, offering excellent stability in the presence or absence of power, as found in magnetic and optical disks, as opposed to volatile DRAM, and yet with a cost-effectiveness far from that of expensive SRAM. The research work suggests that overcoming these barriers may require memory manufacturing to deviate completely from present technologies and adopt one that overcomes the challenges associated with traditional memory technologies.
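
One standard way to quantify the speed trade-off across hierarchy levels is the average memory access time (AMAT), computed recursively as the hit time of a level plus its miss rate times the access time of the next level; the sketch below uses hypothetical latencies and miss rates:

```python
# Sketch: average memory access time (AMAT) across a cache hierarchy, computed
# recursively as hit_time + miss_rate * (access time of the next level).
# Latencies (in CPU cycles) and miss rates below are hypothetical round numbers.

levels = [
    # (name, hit_time_cycles, miss_rate)
    ("L1 cache", 4, 0.05),
    ("L2 cache", 12, 0.20),
    ("L3 cache", 40, 0.30),
]
DRAM_LATENCY = 200  # cycles; treated as the final level that always "hits"

def amat(levels, memory_latency):
    """Fold the hierarchy from the last cache level back up to L1."""
    time = memory_latency
    for name, hit_time, miss_rate in reversed(levels):
        time = hit_time + miss_rate * time
    return time

print(f"AMAT = {amat(levels, DRAM_LATENCY):.1f} cycles")
```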

Keywords: cache, memory-hierarchy, memory, registers, storage

Procedia PDF Downloads 142
547 Variability of the X-Ray Sun during Descending Period of Solar Cycle 23

Authors: Zavkiddin Mirtoshev, Mirabbos Mirkamalov

Abstract:

We have analyzed the time series of full-disk integrated soft X-ray (SXR) and hard X-ray (HXR) emission from the solar corona during 2004 January 1 to 2009 December 31, covering the descending phase of solar cycle 23. We employed the daily X-ray index (DXI) derived from X-ray observations from the Solar X-ray Spectrometer (SOXS) mission in four different energy bands: 4-5.5 and 5.5-7.5 keV (SXR), and 15-20 and 20-25 keV (HXR). The application of the Lomb-Scargle periodogram technique to the DXI time series observed by the silicon detector in these energy bands reveals several short and intermediate periodicities of the X-ray corona. The DXI explicitly shows periods of 13.6 days, 26.7 days, 128.5 days, 151 days, 180 days, 220 days, 270 days, 1.24 years and 1.54 years in the SXR as well as the HXR energy bands. Although all periods are above the 70% confidence level in all energy bands, they show stronger power in the HXR emission in comparison to the SXR emission. These periods are distinctly clear in three bands but not unambiguously clear in the 5.5-7.5 keV band. This might be due to the presence of iron (Fe) and iron/nickel (Fe/Ni) line features, which frequently vary with small-scale flares such as micro-flares. The regular 27-day rotation and the 13.5-day period of sunspots from the invisible side of the Sun are found to be stronger in the HXR band relative to the SXR band. However, the flare-activity Rieger periods (150 and 180 days) and the near-Rieger period of 220 days are very strong in the HXR emission, which is very much expected. On the other hand, our current study reveals a strong 270-day periodicity in the SXR emission, which may be connected with the tachocline, similar to a fundamental rotation period of the Sun. The 1.24-year and 1.54-year periodicities reported in the present research work are well observable in both the SXR and the HXR channels. These long-term periodicities must also have a connection with the tachocline and should be regarded as a consequence of variation in rotational modulation over long time scales. The 1.24-year and 1.54-year periods are also considered of great importance and significance for the formation and evolution of life on the Earth, and therefore they also have great astrobiological importance. We gratefully acknowledge support by the Indian Centre for Space Science and Technology Education in Asia and the Pacific (CSSTEAP; the Centre is affiliated with the United Nations) and the Physical Research Laboratory (PRL) at Ahmedabad, India. This work was done under the supervision of Prof. Rajmal Jain, and the paper consists of materials from the pilot project and the research part of the M.Tech program carried out during the Space and Atmospheric Science Course.
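
A period search of this kind on an unevenly sampled daily index can be sketched with scipy's Lomb-Scargle implementation; the synthetic series, the injected 27-day signal, and the noise level below are illustrative only, not the SOXS DXI data:

```python
# Sketch: Lomb-Scargle periodogram of a synthetic, unevenly sampled daily index,
# scanning trial periods from ~10 to ~600 days. The injected 27-day signal and the
# noise level are illustrative placeholders, not the SOXS DXI observations.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
# About six years of daily sampling with gaps (days chosen at random without replacement)
t = np.sort(rng.choice(np.arange(0.0, 2190.0), size=1500, replace=False))
signal = 1.0 + 0.3 * np.sin(2 * np.pi * t / 27.0) + 0.1 * rng.standard_normal(t.size)

periods = np.linspace(10.0, 600.0, 2000)      # trial periods, in days
ang_freqs = 2 * np.pi / periods               # lombscargle expects angular frequencies
power = lombscargle(t, signal - signal.mean(), ang_freqs)

print(f"strongest period ~ {periods[np.argmax(power)]:.1f} days")
```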

Keywords: corona, flares, solar activity, X-ray emission

Procedia PDF Downloads 330