Search results for: Hadamard coding scheme
82 The Impact of Reducing Road Traffic Speed in London on Noise Levels: A Comparative Study of Field Measurement and Theoretical Calculation
Authors: Jessica Cecchinelli, Amer Ali
Abstract:
The continuing growth in road traffic, and its impact on pollution levels and safety, especially in urban areas, has led local and national authorities to reduce traffic speed and flow in major towns and cities. Several London boroughs have recently reduced the in-city speed limit from 30 mph to 20 mph, mainly to calm traffic, improve safety, and reduce noise and vibration. This paper reports detailed field measurements, using a noise sensor and analyser, together with the corresponding theoretical calculations and analysis of noise levels on a number of roads in the central London Borough of Camden, where the speed limit was reduced from 30 mph to 20 mph on all roads except the major Transport for London (TfL) routes. The measurements, which covered the key noise levels and scales on residential streets and main roads, were conducted on weekdays and weekends during both normal and rush hours. The theoretical calculations followed the UK procedure 'Calculation of Road Traffic Noise 1988', with conversion to the European L-day, L-evening, L-night, L-den and other important levels. The study also includes comparable data and analysis from previously measured noise in the Borough of Camden and other central London boroughs. Classified traffic flow and speed on the roads concerned were observed and used in the calculation part of the study, and relevant data and a description of the weather conditions are reported. The paper also reports a field survey, in the form of face-to-face interview questionnaires, carried out in parallel with the noise measurements to ascertain the opinions and views of local residents and workers in the 20 mph zones. The main findings are that the reduction in speed reduced noise pollution in the studied zones and that the measured and calculated noise levels for each speed zone are closely matched. The field survey found that local residents and workers in the 20 mph zones supported the scheme and felt that it had improved the quality of life in their areas, giving a sense of calmness and safety, particularly for families with children and the elderly, and encouraging pedestrians and cyclists. The key conclusions are that lowering the speed limit in built-up areas would not only reduce the number of serious accidents but would also reduce noise pollution and promote clean modes of transport, particularly walking and cycling. The details of the site observations and the corresponding calculations, together with a critical comparative analysis and relevant conclusions, are reported in the full version of the paper.
Keywords: noise calculation, noise field measurement, road traffic noise, speed limit in London, survey of people satisfaction
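The conversion to the European L-den level mentioned above follows the standard day-evening-night definition (12 h day, 4 h evening with a 5 dB penalty, 8 h night with a 10 dB penalty). The sketch below illustrates that standard formula only; it is not taken from the paper, and the example input levels are hypothetical.

```python
import math

def l_den(l_day: float, l_evening: float, l_night: float) -> float:
    """Day-evening-night level (dB) per the standard European definition:
    12 h day, 4 h evening (+5 dB penalty), 8 h night (+10 dB penalty)."""
    energy = (12 * 10 ** (l_day / 10)
              + 4 * 10 ** ((l_evening + 5) / 10)
              + 8 * 10 ** ((l_night + 10) / 10))
    return 10 * math.log10(energy / 24)

# Hypothetical measured period levels (dB) for a 20 mph street, not from the paper
print(round(l_den(l_day=65.0, l_evening=62.0, l_night=57.0), 1))
```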
Procedia PDF Downloads 423
81 Evaluation of Coupled CFD-FEA Simulation for Fire Determination
Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Ella Quigley, Kevin Tinkham
Abstract:
Fire performance is a crucial aspect to consider when designing cladding products, and testing this performance is extremely expensive. Appropriate use of numerical simulation of fire performance has the potential to reduce the total number of fire tests required when designing a product, by eliminating poor-performing design ideas early in the design phase. Due to the complexity of fire and the large spectrum of failures it can cause, multi-disciplinary models are needed to capture the complex fire behaviour and its structural effects on the surroundings. Working alongside Tata Steel U.K., the authors have focused on completing a coupled CFD-FEA simulation model suited to testing polyisocyanurate (PIR) based sandwich panel products, in order to gain confidence before costly experimental standards testing. The sandwich panels are part of a thermally insulating façade system intended primarily for large non-domestic buildings. The work presented in this paper compares two coupling methodologies against a replicated physical standards test, LPS 1181-1, carried out by Tata Steel U.K. The two coupling methodologies considered within this research are one-way and two-way coupling. A one-way coupled analysis consists of importing thermal data from the CFD solver into the FEA solver. A two-way coupled analysis consists of continuously importing the updated thermal data, reflecting the fire's behaviour, into the FEA solver throughout the simulation; likewise, the mechanical changes are passed back to the CFD solver so that geometric changes are included in the solution. For the CFD calculations, the Fire Dynamics Simulator (FDS) was chosen because its numerical scheme is adapted specifically to fire problems, and its applicability has been validated in past benchmark cases. The FEA solver ABAQUS was chosen to model the structural response to the fire because its crushable foam plasticity model can accurately represent the compressibility of PIR foam. An open-source code called FDS-2-ABAQUS is used to couple the two solvers, using several Python modules to complete the process, including failure checks. The coupling methodologies and the experimental data acquired from Tata Steel U.K. are compared using several variables, including gas temperatures, surface temperatures, and mechanical deformation of the panels. Conclusions are drawn, noting improvements to be made to the current open-source coupling code FDS-2-ABAQUS to make it more applicable to Tata Steel U.K. sandwich panel products. Future directions for reducing the computational cost of the simulation are also considered.
Keywords: fire engineering, numerical coupling, sandwich panels, thermo fluids
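The essence of the one-way coupling described above is a hand-off of thermal data from the CFD solver to the FEA solver. The sketch below is not the FDS-2-ABAQUS code used by the authors; it is a minimal illustration of that hand-off step, with a synthetic heating curve standing in for the CFD output and an assumed time grid on the FEA side.

```python
import numpy as np

def one_way_thermal_handoff(t_cfd, temp_cfd, fea_times):
    """Minimal one-way coupling step: resample a gas-temperature history produced
    by the CFD stage onto the FEA solver's time grid, ready to be written out as a
    tabular amplitude / thermal boundary condition for the structural run."""
    return np.interp(fea_times, t_cfd, temp_cfd)

# Hypothetical CFD output: a crude 30-minute heating curve sampled every 5 s
t_cfd = np.arange(0.0, 1801.0, 5.0)
temp_cfd = 20.0 + 880.0 * (1.0 - np.exp(-t_cfd / 300.0))   # deg C, illustrative only

# The FEA side uses a coarser 10 s grid; in a two-way scheme this exchange would be
# repeated every coupling step, with deformed geometry passed back to the CFD solver.
fea_grid = np.arange(0.0, 1801.0, 10.0)
boundary_temps = one_way_thermal_handoff(t_cfd, temp_cfd, fea_grid)
print(boundary_temps[:5], boundary_temps[-1])
```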
Procedia PDF Downloads 88
80 Artificial Neural Network Approach for GIS-Based Soil Macro-Nutrients Mapping
Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Siti Khairunniza Bejo
Abstract:
Conventional methods for soil nutrient mapping are based on laboratory tests of samples obtained from surveys. The time and cost involved in gathering and analyzing soil samples are the reasons that researchers use Predictive Soil Mapping (PSM). PSM can be defined as the development of a numerical or statistical model of the relationship among environmental variables and soil properties, which is then applied to a geographic database to create a predictive map. Kriging is a group of geostatistical techniques used to spatially interpolate values at unobserved locations from observations at nearby locations. The main problem with using kriging as an interpolator is that it is excessively data-dependent and requires a large number of closely spaced data points. Hence, there is a need to minimize the number of data points without sacrificing the accuracy of the results. In this paper, an Artificial Neural Network (ANN) scheme was used to predict macronutrient values at unsampled points. ANN has become a popular prediction tool because it handles difficulties in soil property prediction such as non-linear relationships and non-normality. Back-propagation multilayer feed-forward network structures were used to predict nitrogen, phosphorus and potassium values in the soil of the study area. A limited number of samples was used in the training, validation and testing phases of the ANN (pattern reconstruction structures) to classify soil properties, and the trained network was used for prediction. The soil analysis results of samples collected from the soil survey of block C of Sawah Sempadan, Tanjung Karang rice irrigation project in Selangor, Malaysia, were used. Soil maps were produced by the kriging method using 236 samples (or values) that were a combination of actual values (obtained from real samples) and virtual values (neural network predicted values). For each macronutrient element, three types of maps were generated, with 118 actual and 118 virtual values, 59 actual and 177 virtual values, and 30 actual and 206 virtual values, respectively. To evaluate the performance of the proposed method, for each macronutrient element a base map using 236 actual samples and test maps using 118, 59 and 30 actual samples, respectively, were produced by the kriging method. A set of parameters was defined to measure the similarity of the maps generated with the proposed method, termed the sample reduction method. The results show that the maps generated through the sample reduction method were more accurate than the corresponding maps produced from the same smaller numbers of real samples alone. For example, nitrogen maps produced from 118, 59 and 30 real samples have 78%, 62% and 41% similarity, respectively, with the base map (236 samples), and the sample reduction method increased the similarity to 87%, 77% and 71%, respectively. Hence, this method can reduce the number of real samples and substitute ANN-predicted samples to achieve the specified level of accuracy.
Keywords: artificial neural network, kriging, macro nutrient, pattern recognition, precision farming, soil mapping
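The core of the sample reduction idea above is a feed-forward network that supplies "virtual" nutrient values at unsampled points, which are then pooled with real samples before kriging. The sketch below is a minimal illustration using scikit-learn's MLPRegressor; the coordinates, nutrient values and network size are hypothetical and are not the study's data or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical sampled data: easting, northing (m) -> soil nitrogen, not the study's data
rng = np.random.default_rng(0)
xy_sampled = rng.uniform(0, 1000, size=(118, 2))
nitrogen = 0.1 + 0.0001 * xy_sampled[:, 0] + rng.normal(0, 0.005, 118)

# Back-propagation multilayer feed-forward network (one hidden layer here for brevity)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0))
model.fit(xy_sampled, nitrogen)

# Predict "virtual" values at unsampled locations; these would be pooled with the
# real samples before kriging in the sample reduction approach described above.
xy_unsampled = rng.uniform(0, 1000, size=(118, 2))
virtual_values = model.predict(xy_unsampled)
print(virtual_values[:5])
```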
Procedia PDF Downloads 70
79 Climate Change Threats to UNESCO-Designated World Heritage Sites: Empirical Evidence from Konso Cultural Landscape, Ethiopia
Authors: Yimer Mohammed Assen, Abiyot Legesse Kura, Engida Esyas Dube, Asebe Regassa Debelo, Girma Kelboro Mensuro, Lete Bekele Gure
Abstract:
Climate change has recently posed severe threats to many cultural landscapes of UNESCO world heritage sites. The UNESCO State of Conservation (SOC) reports categorize flooding, temperature increase, and drought as threats to cultural landscapes. This study aimed to examine variations and trends of rainfall and temperature extreme events and their threats to the UNESCO-designated Konso Cultural Landscape in southern Ethiopia. The study used dense merged satellite-gauge station rainfall data (1981-2020) with a spatial resolution of 4 km by 4 km and observed maximum and minimum temperature data (1987-2020). Qualitative data were also gathered from cultural leaders, local administrators, and religious leaders using structured interview checklists. The spatial patterns, coefficient of variation, standardized anomalies, trends, and magnitude of change of rainfall and temperature extreme events, at both annual and seasonal levels, were computed using the Mann-Kendall trend test and Sen's slope estimator under the CDT package. The standardized precipitation index (SPI) was also used to calculate drought severity, frequency, and trend maps. The data gathered from key informant interviews and focus group discussions were coded and analyzed thematically to complement the statistical findings. Thematic areas that explain the impacts of extreme events on the cultural landscape were chosen for coding, and the thematic analysis was conducted using NVivo software. The findings revealed that rainfall was highly variable and unpredictable, resulting in extreme drought and flood. There were significant (p < 0.05) increasing trends in heavy rainfall (R10mm and R20mm) and the total amount of rain on wet days (PRCPTOT), which might have resulted in flooding. The study also confirmed that the absolute temperature extreme indices (TXx, TXn, and TNx) and the percentile-based temperature extreme indices (TX90p, TN90p, TX10p, and TN10p) showed significant (p < 0.05) increasing trends, which are signals of warming of the study area. The results revealed that the frequency as well as the severity of drought at the 3-month (katana/hageya seasons) timescale was more pronounced than at the 12-month (annual) timescale. The highest number of droughts in 100 years is projected at the 3-month timescale across the study area. The findings also showed that frequent drought has led to loss of the grasses used for making traditional individual houses and multipurpose communal houses (pafta), food insecurity, migration, loss of biodiversity, and commodification of stones from terraces. On the other hand, the increasing trends in rainfall extreme indices resulted in destruction of terraces, soil erosion, loss of life and damage to property. The study shows that a persistent decline in farmland productivity, due to erratic and extreme rainfall and frequent drought occurrences, forced the local people to participate in non-farm activities and retreat from the daily preservation and management of their landscape. Overall, the increasing rainfall and temperature extremes, coupled with the prevalence of drought, are thought to affect the sustainability of the cultural landscape by disrupting the ecosystem services and livelihood of the community. Therefore, more localized adaptation and mitigation strategies to the changing climate are needed to maintain the sustainability of the Konso cultural landscape as a global cultural treasure and to strengthen the resilience of smallholder farmers.
Keywords: adaptation, cultural landscape, drought, extremes indices
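The trend analysis above rests on the Mann-Kendall test and Sen's slope estimator. The sketch below is a bare-bones implementation of both (without tie correction), not the CDT package used by the authors; the example series of annual R20mm counts is hypothetical.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall_sen(series):
    """Mann-Kendall trend test (no tie correction) and Sen's slope for an annual series."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    # S statistic: sum of signs over all later-minus-earlier pairs
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p_value = 2 * (1 - norm.cdf(abs(z)))           # two-sided p-value
    slopes = [(x[j] - x[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
    return z, p_value, np.median(slopes)            # Sen's slope = median pairwise slope

# Hypothetical annual R20mm counts (days/year), not the Konso data
r20 = np.array([4, 5, 3, 6, 7, 6, 8, 7, 9, 10, 9, 11, 12, 11, 13])
print(mann_kendall_sen(r20))
```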
Procedia PDF Downloads 24
78 Spatial Pattern of Farm Mechanization: A Micro Level Study of Western Trans-Ghaghara Plain, India
Authors: Zafar Tabrez, Nizamuddin Khan
Abstract:
Agriculture in India in the pre-green revolution period was mostly controlled by terrain, climate and edaphic factors. After the introduction of innovative factors and technological inputs, however, the green revolution occurred and the agricultural scene witnessed great change. In the development of India's agriculture, the speedy and extensive introduction of technological change has been one of the crucial factors. This technological change consists of the adoption of farming techniques such as the use of fertilisers, pesticides and fungicides, improved varieties of seeds, modern agricultural implements, improved irrigation facilities, and contour bunding for the conservation of moisture and soil, which are developed through research and calculated to bring about diversification, increased production and greater economic returns to farmers. The green revolution in India took place during the late 1960s, equipped with technological inputs like high-yielding variety seeds and assured irrigation as well as modern machines and implements. Initially the revolution started in Punjab, Haryana and western Uttar Pradesh. With the efforts of government, agricultural planners and policy makers, the modern technocratic agricultural development scheme was later also implemented in backward and marginal regions of the country. The agriculture sector occupies centre stage in India's social security and overall economic welfare. The country has attained self-sufficiency in food grain production and also has a sufficient buffer stock. India's first Prime Minister, Jawaharlal Nehru, said 'everything else can wait but not agriculture'. There is still continuous change in technological inputs and cropping patterns. Keeping these points in view, the authors attempt to investigate extensively the mechanization of agriculture and the change it has undergone, selecting the western Trans-Ghaghara plain as a case study and the block as the unit of study. The study area includes the districts of Gonda, Balrampur, Bahraich and Shravasti, which incorporate 44 blocks. The study is based on secondary sources of data by blocks for the years 1997 and 2007. It may be observed that there is a wide range of variation and change in farm mechanization, i.e., agricultural machinery such as wooden and iron ploughs, advanced harrows and cultivators, advanced threshing machines, sprayers, advanced sowing implements, and tractors. It may be further noted that, due to the continuous decline in the size of land holdings and the outflow of people to the same nature of work elsewhere or to employment in non-agricultural sectors, the magnitude and direction of agricultural systems are affected in the study area, which is one of the marginalized regions of Uttar Pradesh, India.
Keywords: agriculture, technological inputs, farm mechanization, food production, cropping pattern
Procedia PDF Downloads 311
77 Elderly in Sub Saharan Africa
Authors: Obinna Benedict Duru
Abstract:
This study focuses on the elderly and the challenges that confront them. The elderly are that segment of our population who, by virtue of the aging process, have in most cases reached the stage where they are confronted with the challenges of economic dependency and social marginality. These challenges result from the physical and biological decline occasioned by social myths and realities which portray the elderly as a dependent population whose members could not and should not work and who need social assistance that the younger population is obliged to provide. From the moment of birth to the moment of death, our bodies are constantly changing. We are all enmeshed in the process of growing old, a transition from youthfulness to elderliness. In youth-oriented modern societies like ours, we tend to attach positive importance and significance to the biological changes that occur early in life and define later physical changes in negative terms. Children growing up and young adults receive more attention, greater responsibilities and more legal rights to reward them on their way, but few people are congratulated on getting old. We commiserate with people who are getting old and make jokes about their supposed physical, mental and biological decline. Wrinkles, loss of weight and loss of vitality are all part of the aging process. In almost all parts of the world, earlier research has shown that about fifty percent of the elderly who suffer from stroke, arthritis, senility and other age-related diseases are the disengaged and neglected elderly. Rapid technological change renders the knowledge and skills of the elderly obsolete; education is geared toward the young, and generational competition for jobs leads to pressure on the elderly to retire. Control of initial resources is shifted to the middle-aged, and older workers are pushed into positions of economic dependency. This study therefore, among other things, seeks to discover how some government policies have affected the elderly, particularly in Africa; to examine the prospects and possibilities of the elderly for better living; and to compare the advances in healthcare giving made in advanced Western societies with the practice in Sub-Saharan Africa. The hypotheses of this study include: that the elderly in Sub-Saharan Africa are more vulnerable than their counterparts in Europe and America; that the elderly are more prone to social isolation; and that the elderly are mostly affected by age-related sickness. With a survey method as the research design and a sample size of about 500 respondents, a probability sampling technique was used. Data, which were analyzed using chi-square and tables, were collected through primary and secondary sources. The findings include: that the elderly suffer the pains of old age especially when disengaged from work or social activity; that loss of income condemns the elderly to a life of vegetable existence; and that those who do not have other means of re-integration usually view old age with regret and despair. It is therefore recommended, among other things, that a social welfare scheme and a process of re-integration in old age be introduced for the non-pensionable elderly in Africa.
Keywords: elderly, social isolation, dependency, re-integration
Procedia PDF Downloads 332
76 A Study of the Effect of the Flipped Classroom on Mixed Abilities Classes in Compulsory Secondary Education in Italy
Authors: Giacoma Pace
Abstract:
The research seeks to evaluate whether students with impairments can achieve enhanced academic progress by actively engaging in collaborative problem-solving activities with teachers and peers, in order to overcome obstacles rooted in socio-economic disparities. Furthermore, the research underscores the significance of fostering students' self-awareness of their learning process and encourages teachers to adopt a more interactive teaching approach. The research also posits that reducing conventional face-to-face lessons can motivate students to explore alternative learning methods, such as collaborative teamwork and peer education within the classroom. To address socio-cultural barriers, it is imperative to assess students' internet access and possession of technological devices, as these factors can contribute to a digital divide. The research features a case study of a flipped classroom learning unit administered to six third-year high school classes from a Scientific Lyceum, a Technical School, and a Vocational School in the city of Turin, Italy. The data cover the teachers and students involved in the case study, the impaired students in each class, entry level, students' performance and attitudes before using the flipped classroom, level of motivation, family involvement, teachers' attitudes towards the flipped classroom, goals attained, the pros and cons of such activities, and technology availability. The selected schools were contacted, and meetings were held with the English teachers to gather information about their attitudes towards and knowledge of the flipped classroom approach. Questionnaires were administered to teachers and IT staff. The information gathered was used to outline the profile of the subjects involved in the study and was then compared with the second step of the study, conducted with the classes of the selected schools. The learning unit is the same for all classes; its structure and content were decided together with the English teachers of the classes involved. The pacing and content are matched in every lesson, and all the classes participate in the same labs, use the same materials and homework, and receive the same assessment through summative and formative testing. Each step follows a precise scheme in order to be as reliable as possible, and the outcomes of the case study will be statistically organised. The case study is accompanied by a review of the literature concerning EFL approaches and the flipped classroom. A document analysis method was employed, i.e., a qualitative research method in which printed and/or electronic documents containing information about the research subject are reviewed and evaluated with a systematic procedure. Articles in the Web of Science Core Collection, Education Resources Information Center (ERIC), Scopus and ScienceDirect databases were searched in order to determine the documents to be examined (covering the years 2000-2022).
Keywords: flipped classroom, impaired, inclusivity, peer instruction
Procedia PDF Downloads 52
75 A Cross Cultural Study of Jewish and Arab Listeners: Perception of Harmonic Sequences
Authors: Roni Granot
Abstract:
Musical intervals are the building blocks of melody and harmony. Intervals differ in terms of their size, direction, or quality as consonant or dissonant. In Western music, perceptual dissonance is mostly associated with the sensation of beats or periodicity, whereas cognitive dissonance is associated with rules of harmony and voice leading. These two perceptions can be studied separately in musical cultures whose music is melodic with little or no harmonic structure. In the Arab musical system, there are a number of different quarter-tone intervals creating various combinations of consonant and dissonant intervals. While traditional Arab music includes only melody, today's Arab pop music includes harmonization of songs, often using typical Western harmonic sequences. Therefore, the Arab population in Israel presents an interesting case which enables us to examine the distinction between perceptual and cognitive dissonance. In the current study, we compared the responses of 34 Jewish Western listeners and 56 Arab listeners to two types of stimuli and their relationships: harmonic sequences and isolated harmonic intervals (dyads). Harmonic sequences were presented in synthesized piano tones and represented five levels of harmonic prototypicality (tonic ending; tonic ending with half-flattened third; deceptive cadence; half cadence; and dissonant unrelated ending) and were rated on 5-point scales of closure and surprise. Here we report only findings related to the harmonic sequences. A repeated measures ANOVA with one within-subjects factor with five levels (Type of sequence) and one between-subjects factor (Musical background) indicates a main effect of Type of sequence for surprise ratings, F(4, 85) = 51, p < .001, and for closure ratings, F(4, 78) = 9.54, p < .001, no main effect of Background on either surprise or closure ratings, and a marginally significant Type x Background interaction for surprise ratings, F(4, 352) = 6.05, p = .069, and closure ratings, F(4, 324) = 3.89, p < .01. Planned comparisons show that the Type of sequence x Background interaction centers on the surprise and closure ratings of the regular versus the half-flattened-third tonic and the deceptive versus the half cadence. The half-flattened-third tonic is rated as less surprising and as demanding less continuation than the regular tonic by the Arab listeners as compared to the Western listeners. In addition, the half cadence is rated as more surprising but demanding less continuation than the deceptive cadence by the Arab listeners as compared to the Western listeners. Together, our results suggest that despite the vast exposure of Arab listeners to Western harmony, sensitivity to harmonic rules seems to be partial, with a preference for oriental sonorities such as the half-flattened third. In addition, the percept of directionality, which demands sensitivity to the level on which closure is obtained and which is strongly entrenched in Western harmony, may not be fully integrated into the Arab listeners' mental harmonic scheme. Results will be discussed in terms of broad differences between Western and Eastern aesthetic ideals.
Keywords: harmony, cross cultural, Arab music, closure
Procedia PDF Downloads 274
74 Optimal Control of Generators and Series Compensators within Multi-Space-Time Frame
Authors: Qian Chen, Lin Xu, Ping Ju, Zhuoran Li, Yiping Yu, Yuqing Jin
Abstract:
The operation of the power grid is becoming more and more complex and difficult due to its rapid development towards high voltage, long distance, and large capacity. For instance, many large-scale wind farms have been connected to the grid, and their fluctuation and randomness are very likely to affect its stability and safety. Fortunately, many new types of equipment based on power electronics have been applied to the power grid, such as the UPFC (Unified Power Flow Controller), TCSC (Thyristor Controlled Series Compensation), STATCOM (Static Synchronous Compensator) and so on, which can help to deal with this problem. Compared with traditional equipment such as generators, the new controllable devices, represented by FACTS (Flexible AC Transmission System) equipment, have more accurate control ability and respond faster, but they are too expensive to use widely. Therefore, on the basis of a comparison and analysis of the control characteristics of traditional equipment and new controllable equipment on both time and space scales, a coordinated optimizing control method within a multi-space-time frame is proposed in this paper to bring both kinds of advantages into play, improving both control ability and economic efficiency. Firstly, the coordination of different spatial scales of the grid is studied, focusing on the fluctuation caused by large-scale wind farms connected to the grid. With generators, FSC (Fixed Series Compensation) and TCSC, the coordination method for a two-layer regional power grid versus its sub-grid is studied in detail. The coordination control model is built, the corresponding scheme is proposed, and the conclusions are verified by simulation. The analysis shows that the interface power flow can be controlled by the generators and that the power flow on specific lines between the two-layer regions can be adjusted by the FSC and TCSC. The smaller the interface power flow adjusted by the generators, the bigger the control margin of the TCSC; on the other hand, the total consumption of the generators is much higher. Secondly, the coordination of different time scales is studied to further balance the total consumption of the generators against the control margin of the TCSC, so that the minimum control cost can be obtained. The coordination method for two-layer ultra-short-term correction versus AGC (Automatic Generation Control) is studied with generators, FSC and TCSC. The optimal control model is formulated, a genetic algorithm is selected to solve the problem, and the conclusions are verified by simulation. Finally, the aforementioned multi-space-time-scale method is analyzed with practical cases and simulated on the PSASP (Power System Analysis Software Package) platform. Its correctness and effectiveness are verified by the simulation results. Moreover, this coordinated optimizing control method can contribute to decreasing control cost and will provide a reference for subsequent studies in this field.
Keywords: FACTS, multi-space-time frame, optimal control, TCSC
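The second echelon above is solved with a genetic algorithm. The sketch below is a generic GA minimizing an illustrative two-variable control-cost function; the cost weights, variable bounds and GA settings are hypothetical and are not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(1)

def control_cost(x: np.ndarray) -> float:
    """Illustrative cost: generator adjustment (x[0], MW) is expensive, TCSC
    compensation (x[1], per unit) is cheap but limited. Weights are hypothetical."""
    gen_adjust, tcsc_comp = x
    return 5.0 * gen_adjust ** 2 + 0.5 * tcsc_comp ** 2 + 100.0 * abs(gen_adjust + 10 * tcsc_comp - 50)

def genetic_algorithm(bounds, pop_size=40, generations=200, mutation=0.1):
    low, high = np.array([b[0] for b in bounds]), np.array([b[1] for b in bounds])
    pop = rng.uniform(low, high, size=(pop_size, len(bounds)))
    for _ in range(generations):
        fitness = np.array([control_cost(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]                # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(len(bounds)) < 0.5, a, b)          # uniform crossover
            child += rng.normal(0, mutation, len(bounds)) * (high - low)   # Gaussian mutation
            children.append(np.clip(child, low, high))
        pop = np.vstack([parents, children])
    best = pop[np.argmin([control_cost(ind) for ind in pop])]
    return best, control_cost(best)

print(genetic_algorithm(bounds=[(0.0, 60.0), (0.0, 5.0)]))
```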
Procedia PDF Downloads 265
73 Enhancing the Performance of Automatic Logistic Centers by Optimizing the Assignment of Material Flows to Workstations and Flow Racks
Authors: Sharon Hovav, Ilya Levner, Oren Nahum, Istvan Szabo
Abstract:
In modern large-scale logistic centers (e.g., big automated warehouses), complex logistic operations performed by human staff (pickers) need to be coordinated with the operations of automated facilities (robots, conveyors, cranes, lifts, flow racks, etc.). The efficiency of advanced logistic centers strongly depends on optimizing picking technologies in sync with the facility/product layout, as well as on the optimal distribution of material flows (products) in the system. The challenge is to develop a mathematical operations research (OR) tool that will optimize system cost-effectiveness. In this work, we propose a model that describes an automatic logistic center consisting of a set of workstations located in several galleries (floors), with each station containing a known number of flow racks. The requirements of each product and the working capacity of stations served by a given set of workers (pickers) are assumed to be predetermined. The goal of the model is to maximize system efficiency. The proposed model includes two echelons. The first is the setting of the (optimal) number of workstations needed to create the total processing/logistic system, subject to picker capacities. The second echelon deals with the assignment of the products to the workstations and flow racks, aimed at achieving maximal throughput of picked products over the entire system given picker capacities and budget constraints. The solutions to the problems at the two echelons interact to balance the overall load in the flow racks and maximize overall efficiency. We have developed an operations research model for each echelon. In the first echelon, the problem of calculating the optimal number of workstations is formulated as a non-standard bin-packing problem with capacity constraints for each bin. The problem arising in the second echelon is presented as a constrained product-workstation-flow rack assignment problem with a non-standard min-max criterion, in which the workload maximum is calculated across all workstations in the center and the exterior minimum is calculated across all possible product-workstation-flow rack assignments. The OR problems arising in each echelon are proved to be NP-hard. Consequently, we develop heuristic and approximation solution algorithms based on exploiting and improving local optima. The logistic center (LC) model considered in this work is highly dynamic and is recalculated periodically based on updated demand forecasts that reflect market trends, technological changes, seasonality, and the introduction of new items. The suggested two-echelon approach and the min-max balancing scheme are shown to work effectively on illustrative examples and real-life logistic data.
Keywords: logistics center, product-workstation, assignment, maximum performance, load balancing, fast algorithm
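The first echelon above is a capacity-constrained bin-packing question: how many workstations are needed to host all product workloads? The sketch below shows the classic first-fit-decreasing heuristic for that kind of problem; it is a standard textbook method, not necessarily the authors' algorithm, and the workloads and capacity are hypothetical.

```python
def first_fit_decreasing(workloads, station_capacity):
    """First-fit-decreasing heuristic: pack product workloads into as few
    workstations (bins) of a given picker capacity as the heuristic can manage.
    Returns one [remaining_capacity, assigned_workloads] pair per opened station."""
    stations = []
    for load in sorted(workloads, reverse=True):
        for station in stations:
            if station[0] >= load:          # first station with enough spare capacity
                station[0] -= load
                station[1].append(load)
                break
        else:                               # no existing station fits: open a new one
            stations.append([station_capacity - load, [load]])
    return stations

# Hypothetical product workloads (picks/hour) and picker capacity per station
demo = first_fit_decreasing([120, 80, 75, 60, 55, 40, 30, 30, 20], station_capacity=200)
print(len(demo), "workstations needed:", [s[1] for s in demo])
```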
Procedia PDF Downloads 226
72 Design and Biomechanical Analysis of a Transtibial Prosthesis for Cyclists of the Colombian Team Paralympic
Authors: Jhonnatan Eduardo Zamudio Palacios, Oscar Leonardo Mosquera Dussan, Daniel Guzman Perez, Daniel Alfonso Botero Rosas, Oscar Fabian Rubiano Espinosa, Jose Antonio Garcia Torres, Ivan Dario Chavarro, Ivan Ramiro Rodriguez Camacho, Jaime Orlando Rodriguez
Abstract:
The training of cyclists with some type of disability finds an indispensable ally in technological development, which generates advances every day that contribute to quality of life and allow the capacities of the athletes to be maximized. The performance of a cyclist depends on physiological and biomechanical factors, such as the aerodynamic profile, bicycle measurements, crank length, pedaling system, and type of competition, among others. This study focuses in particular on the description of the dynamic model of a transtibial prosthesis for Paralympic cyclists. To build the model, two points are chosen, at the centers of rotation of the chainring and sprocket of the track bicycle. The parametric scheme of the track bike represents a model of 6 degrees of freedom, due to the displacement in X-Y of each of the reference points as functions of the curve profile angle β, the cant of the velodrome α and the angle of rotation of the crank φ. The force exerted on the crank of the bicycle varies according to the curve profile angle β, the velodrome cant α and the angle of rotation of the crank φ. The behavior is analyzed using Matlab R2015a. The average force that a cyclist exerts on the cranks of a bicycle is 1,607.1 N, so the Paralympic cyclist must exert a force of about 803.6 N on each crank. Once the maximum force associated with the movement has been determined, the dynamic modeling of the transtibial prosthesis is carried out; it represents a model of 6 degrees of freedom with displacement in X-Y in relation to the angles of rotation of the hip π, knee γ and ankle λ. Subsequently, an analysis of the kinematic behavior of the prosthesis was carried out by means of SolidWorks 2017 and Matlab R2015a, which were used to model and analyze the variation of the hip π, knee γ and ankle λ angles of the prosthesis. The reaction forces generated in the prosthesis were computed at the ankle of the prosthesis by summing the forces on the X and Y axes. The same analysis was then applied to the tibia of the prosthesis and the socket. The reaction forces in the parts of the prosthesis vary according to the hip π, knee γ and ankle λ angles of the prosthesis. It can therefore be deduced that the maximum forces experienced by the ankle of the prosthesis are 933.6 N on the X axis and 2,160.5 N on the Y axis. Finally, it is calculated that the maximum forces experienced by the tibia and the socket of the transtibial prosthesis in high-performance competitions are 3,266 N on the X axis and 1,357 N on the Y axis. In conclusion, the performance of the cyclist depends on several physiological factors linked to the biomechanics of training, as well as on biomechanical factors such as aerodynamics, bicycle measurements, crank length, and non-circular pedaling systems.
Keywords: biomechanics, dynamic model, paralympic cyclist, transtibial prosthesis
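The ankle reactions described above come from summing force components along the X and Y axes. The sketch below is a deliberately simplified static balance with a hypothetical load path and hypothetical angles; it is not the paper's 6-degree-of-freedom model, and only the 803.6 N per-crank figure is taken from the abstract.

```python
import numpy as np

def ankle_reaction(pedal_force_n: float, crank_angle_deg: float, ankle_angle_deg: float):
    """Toy static balance: resolve a pedal force applied along the crank direction
    into X/Y components and rotate them into the ankle (prosthesis) frame.
    The geometry and load path are assumptions for illustration only."""
    phi = np.radians(crank_angle_deg)
    lam = np.radians(ankle_angle_deg)
    # Pedal force components in the bicycle frame
    fx, fy = pedal_force_n * np.cos(phi), pedal_force_n * np.sin(phi)
    # Rotate into the ankle frame to obtain the reaction the prosthesis must resist
    rx = fx * np.cos(lam) + fy * np.sin(lam)
    ry = -fx * np.sin(lam) + fy * np.cos(lam)
    return rx, ry

# Hypothetical case: 803.6 N per crank (from the abstract), crank at 60 deg, ankle at 15 deg
print(ankle_reaction(803.6, 60.0, 15.0))
```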
Procedia PDF Downloads 339
71 Intermodal Strategies for Redistribution of Agrifood Products in the EU: The Case of Vegetable Supply Chain from Southeast of Spain
Authors: Juan C. Pérez-Mesa, Emilio Galdeano-Gómez, Jerónimo De Burgos-Jiménez, José F. Bienvenido-Bárcena, José F. Jiménez-Guerrero
Abstract:
Environmental costs and road congestion resulting from product distribution in Europe have led to the creation of various programs and studies seeking to reduce these negative impacts. In this regard, apart from other institutions, the European Commission (EC) has in recent years designed plans promoting a more sustainable transportation model, in an attempt to ultimately shift traffic from road to sea by using intermodality to rebalance the modal split. This issue proves especially relevant for supply chains from peripheral areas of the continent, where the supply of certain agrifood products is high. In such cases, the most difficult challenge is managing perishable goods. This study focuses on new approaches that strengthen the modal shift as well as the reduction of externalities. The problem is analyzed by attempting to promote an intermodal system (truck and short sea shipping) for transport, taking as a point of reference highly perishable products (vegetables) exported from southeast Spain, which is the leading supplier to Europe. Methodologically, this paper seeks to contribute to the literature by proposing a different and complementary approach to comparing intermodal transport with the road-only alternative. For this purpose, multicriteria decision-making is applied in a p-median model (P-M) adapted to the transport of perishables and to a mode-of-shipping selection problem, which must consider different variables: transit cost, including externalities, time, and frequency (including agile response time). This scheme avoids bias in decision-making processes. The results show that the influence of externalities as drivers of the modal shift is reduced when transit time is introduced as a decision variable. These findings confirm that general strategies, such as those of the EC, based on environmental benefits lose their capacity for implementation when they are applied to complex circumstances. In general, the different estimations reveal that, in the case of perishables, intermodality would be a secondary and viable option only for very specific destinations (for example, Hamburg and nearby locations, the area of influence of London, Paris, and the Netherlands). Based on this framework, the general outlook on this subject should be modified. Perhaps governments should promote specific business strategies based on new trends in the supply chain, not only on the reduction of externalities, and find new approaches that strengthen the modal shift. A possible option is to redefine ports, conceptualizing them as digitalized redistribution and coordination centers and not only as areas of cargo exchange.
Keywords: environmental externalities, intermodal transport, perishable food, transit time
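The decision framework above weighs transit cost (including externalities), time and frequency per destination. The sketch below is not the paper's p-median formulation; it is a simplified weighted-sum comparison of road-only versus intermodal per destination, with all costs, times, frequencies and weights invented for illustration.

```python
# Simplified multicriteria comparison of road-only vs. intermodal (truck + short sea
# shipping) per destination. All figures and weights are hypothetical, not the
# paper's p-median model or its data.
OPTIONS = {
    "Hamburg": {
        "road":       {"cost": 2900, "externality": 600, "hours": 48, "departures_per_week": 7},
        "intermodal": {"cost": 2000, "externality": 250, "hours": 60, "departures_per_week": 3},
    },
    "Munich": {
        "road":       {"cost": 2400, "externality": 500, "hours": 36, "departures_per_week": 7},
        "intermodal": {"cost": 2600, "externality": 300, "hours": 84, "departures_per_week": 2},
    },
}
# Lower score is better; frequency enters with a negative weight so more departures help.
WEIGHTS = {"cost": 0.35, "externality": 0.15, "hours": 0.35, "departures_per_week": -0.15}

def score(attrs, maxima):
    # Normalize each criterion by its maximum across modes so the units are comparable.
    return sum(WEIGHTS[k] * (v / maxima[k]) for k, v in attrs.items())

for destination, modes in OPTIONS.items():
    maxima = {k: max(m[k] for m in modes.values()) for k in WEIGHTS}
    best = min(modes, key=lambda m: score(modes[m], maxima))
    print(f"{destination}: preferred mode -> {best}")
```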
Procedia PDF Downloads 96
70 Understanding the Cause(S) of Social, Emotional and Behavioural Difficulties of Adolescents with ADHD and Its Implications for the Successful Implementation of Intervention(S)
Authors: Elisavet Kechagia
Abstract:
Due to the interplay of different genetic and environmental risk factors and its heterogeneous nature, the concept of attention deficit hyperactivity disorder (ADHD) has generated controversy and conflict, which are in turn reflected in the controversial arguments about its treatment. Taking into account recent well-evidenced research suggesting that ADHD is a condition in which biopsychosocial factors are all woven together, the current paper explores the multiple risk factors that are likely to influence ADHD, with a particular focus on adolescents with ADHD who might experience comorbid social, emotional and behavioural disorders (SEBD). In the first section of this paper, the primary objective is to investigate the conflicting ideas regarding the definition, diagnosis and treatment of ADHD at an international level, and to critically examine and identify the limitations of the two most prevalent sets of diagnostic criteria that inform current diagnosis: the American Psychiatric Association's (APA) diagnostic scheme, DSM-V, and the World Health Organisation's (WHO) classification of diseases, ICD-10. Taking into consideration the findings of current longitudinal studies on the association of ADHD with high rates of comorbid conditions and social dysfunction, the second section moves towards an investigation of the transitional points (physical, psychological and social) that students with ADHD might experience during early adolescence, as informed by neuroscience and developmental contextualism theory. The third section is an exploration of the different perspectives on ADHD as reflected in the self-reports of individuals with ADHD and in the KENT project's findings on school staff's attitudes and practices. In the last section, given the high rates of SEBD in adolescents with ADHD, it is examined how cognitive behavioural therapy (CBT), coupled with other interventions, could be effective in ameliorating anti-social behaviours and/or other emotional and behavioural difficulties of students with ADHD. The findings of a range of randomised controlled studies indicate that CBT might have positive outcomes in adolescents with multiple behavioural problems; hence it is suggested that it be considered both in schools and in other community settings. Finally, taking into account the heterogeneous nature of ADHD, the different biopsychosocial and environmental risk factors that come into play during adolescence, and the discourse and practices concerning ADHD and SEBD, it is suggested how it might be possible to make sense of, and meaningful improvements to, the education of adolescents with ADHD within a multi-modal and multi-disciplinary whole-school approach that addresses the multiple problems that not only students with ADHD but also their peers might experience. Further research based on larger-scale controls, investigating the effectiveness of various interventions as well as the profiles of those students who have benefited from particular approaches and those who have not, will generate further evidence concerning the psychoeducation of adolescents with ADHD, allowing generalised conclusions to be drawn.
Keywords: adolescence, attention deficit hyperactivity disorder, cognitive behavioural therapy, comorbid social emotional behavioural disorders, treatment
Procedia PDF Downloads 317
69 Microbial Fuel Cells: Performance and Applications
Authors: Andrea Pietrelli, Vincenzo Ferrara, Bruno Allard, Francois Buret, Irene Bavasso, Nicola Lovecchio, Francesca Costantini, Firas Khaled
Abstract:
This paper aims to show some applications of microbial fuel cells (MFCs), an energy harvesting technique, as a clean power source to supply low-power devices, for example the nodes of a wireless sensor network (WSN) for environmental monitoring. Furthermore, an MFC can be used directly as a biosensor to analyse parameters like pH and temperature, or arranged in clusters to be used as a small power plant. An MFC is a bioreactor that converts energy stored in the chemical bonds of organic matter into electrical energy through a series of reactions catalysed by microorganisms. We have developed a lab-scale terrestrial microbial fuel cell (TMFC), based on soil that acts as the source of bacteria and the flow of nutrients, and a lab-scale wastewater microbial fuel cell (WWMFC), where wastewater provides both the flow of nutrients and the bacteria. We performed a large series of tests to explore their capability as biosensors. The pH value has a strong influence on the open circuit voltage (OCV) delivered by TMFCs. We analyzed three conditions: tests A and B were filled with the same soil but with the pH changed from 6 to 6.63, while test C was prepared using a different soil with a pH value of 6.3. The experimental results clearly show that a higher pH value produces a higher OCV; the reactors are influenced by pH, with the voltage increasing as the pH rises towards the optimal value of 7. The influence of pH on the OCV of lab-scale WWMFCs was analyzed at pH values of 6.5, 7, 7.2, 7.5 and 8. WWMFCs are influenced by temperature more than TMFCs. We tested the power performance of WWMFCs at four imposed values of ambient temperature. The results show that power performance increases proportionally with temperature, the output power doubling from 20 °C to 40 °C. The best power produced by our lab-scale TMFC was 310 μW using peaty soil at 1 kΩ, corresponding to a current of 0.5 mA. A TMFC can supply adequate energy to the low-power devices of a WSN by means of a three-stage energy management system, which adapts the voltage level of the TMFC to that required by a WSN node, i.e., 3.3 V. Using a commercial DC/DC boost converter that needs an input voltage of 700 mV, the 0.5 mA current source charges a 6.8 mF capacitor until it has accumulated a voltage of 700 mV, in a time of about 10 s. The output stage includes a switch that closes the circuit after a time of 10 s + 1.5 ms, because the converter can boost the voltage from 0.7 V to 3.3 V in 1.5 ms. Furthermore, we tested clusters of up to 20 WWMFCs connected in series and obtained a high output voltage, around 10 V, but a low current. MFCs can thus be considered a suitable clean energy source to supply low-power devices such as a WSN node, or to be used directly as biosensors.
Keywords: energy harvesting, low power electronics, microbial fuel cell, terrestrial microbial fuel cell, waste-water microbial fuel cell, wireless sensor network
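The charging figures quoted above (0.5 mA into 6.8 mF up to the 700 mV converter threshold in about 10 s) can be checked with the basic constant-current capacitor relations. The calculation below is an idealized sanity check only, assuming no leakage and no converter losses.

```python
# Quick check of the energy-management figures quoted above, assuming an ideal
# constant-current charge (no leakage, no converter losses).
I = 0.5e-3        # harvested current, A
C = 6.8e-3        # storage capacitor, F
V_start = 0.7     # boost converter input threshold, V

t_charge = C * V_start / I                 # t = C * dV / I
energy_stored = 0.5 * C * V_start ** 2     # E = 1/2 * C * V^2

print(f"charge time ~ {t_charge:.1f} s")                 # ~9.5 s, consistent with the ~10 s quoted
print(f"energy per cycle ~ {energy_stored * 1e3:.2f} mJ")  # ~1.67 mJ available to the boost stage
```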
Procedia PDF Downloads 206
68 Searching Knowledge for Engagement in a Worker Cooperative Society: A Proposal for Rethinking Premises
Authors: Soumya Rajan
Abstract:
Delving into the heart of any organization, the structural prerequisites that form the framework of its system allure and sometimes invoke great interest. In an attempt to understand the ecosystem of knowledge that exists in organizations with diverse ownership and legal blueprints, cooperative societies, which form a crucial part of the neo-liberal movement in India, were studied. The exploration surprisingly led the researcher to redesign at least a set of premises on the drivers of engagement in an otherwise structured trade environment. The liberal organizational structure of cooperative societies is built on certain terms: voluntary, democratic, equality and distributive justice. To condense it in Hubert Calvert's words, 'Co-operation is a form of organization wherein persons voluntarily associated together as human beings on the basis of equality for the promotion of the economic interest of themselves.' In India, the institutions that work under this principle are largely registered under the central or state Cooperative Societies Acts. A worker cooperative society which originated as a movement in the state of Kerala and spread its wings across the country, Indian Coffee House, was chosen as the enterprise for further inquiry, as it is a living example and a highly successful working model in this space. The exploratory study reached out to employees and key stakeholders of Indian Coffee House to understand the nuances of the structure and the scope it provides for engagement. The key questions which took shape in the mind of the researcher while engaging in the inquiry were: How has the organization sustained itself despite its principle of accepting employees with no skills into employment and later training and empowering them? How can a system which has existed both pre-independence and post-independence (independence here meaning colonial independence from Great Britain) seek to engage employees within the premise of equality? How was the value of socialism ingrained in a commercial enterprise which has a turnover of several hundred crores each year? How did the vision of a flat structure, way back in the 1940s, find its way into the organizational structure, and how has it continued to remain the way of life? These questions were addressed by the case study research that ensued; placing knowledge as the key premise, the possibilities for engagement of the organization man were pictured. Although the macro or holistic unit of analysis is the organization, it is pivotal to understand the structures and processes which best reflect on the actors. The embedded design adopted in this study delivered insights from different stakeholder actors across diverse departments. While moving through variables which define and sometimes defy the bounds of rationality, the study brought to light the inherent features of the organizational structure and how it influences the actors who form a crucial part of the scheme of things. The research brought forth the key enablers of engagement and specifically explored the standpoint of knowledge in the larger structure of the cooperative society.
Keywords: knowledge, organizational structure, engagement, worker cooperative
Procedia PDF Downloads 236
67 Reconceptualizing Evidence and Evidence Types for Digital Journalism Studies
Authors: Hai L. Tran
Abstract:
In the digital age, evidence-based reporting is touted as a best practice for seeking the truth and keeping the public well-informed. Journalists are expected to rely on evidence to demonstrate the validity of a factual statement and lend credence to an individual account. Evidence can be obtained from various sources, and due to a rich supply of evidence types available, the definition of this important concept varies semantically. To promote clarity and understanding, it is necessary to break down the various types of evidence and categorize them in a more coherent, systematic way. There is a wide array of devices that digital journalists deploy as proof to back up or refute a truth claim. Evidence can take various formats, including verbal and visual materials. Verbal evidence encompasses quotes, soundbites, talking heads, testimonies, voice recordings, anecdotes, and statistics communicated through written or spoken language. There are instances where evidence is simply non-verbal, such as when natural sounds are provided without any verbalized words. On the other hand, other language-free items exhibited in photos, video footage, data visualizations, infographics, and illustrations can serve as visual evidence. Moreover, there are different sources from which evidence can be cited. Supporting materials, such as public or leaked records and documents, data, research studies, surveys, polls, or reports compiled by governments, organizations, and other entities, are frequently included as informational evidence. Proof can also come from human sources via interviews, recorded conversations, public and private gatherings, or press conferences. Expert opinions, eye-witness insights, insider observations, and official statements are some of the common examples of testimonial evidence. Digital journalism studies tend to make broad references when comparing qualitative versus quantitative forms of evidence. Meanwhile, limited efforts are being undertaken to distinguish between sister terms, such as “data,” “statistical,” and “base-rate” on one side of the spectrum and “narrative,” “anecdotal,” and “exemplar” on the other. The present study seeks to develop the evidence taxonomy, which classifies evidence through the quantitative-qualitative juxtaposition and in a hierarchical order from broad to specific. According to this scheme, data, statistics, and base rate belong to the quantitative evidence group, whereas narrative, anecdote, and exemplar fall into the qualitative evidence group. Subsequently, the taxonomical classification arranges data versus narrative at the top of the hierarchy of types of evidence, followed by statistics versus anecdote and base rate versus exemplar. This research reiterates the central role of evidence in how journalists describe and explain social phenomena and issues. By defining the various types of evidence and delineating their logical connections, it helps remove a significant degree of conceptual inconsistency, ambiguity, and confusion in digital journalism studies.
Keywords: evidence, evidence forms, evidence types, taxonomy
Procedia PDF Downloads 67
66 Electromagnetic Modeling of a MESFET Transistor Using the Moments Method Combined with Generalised Equivalent Circuit Method
Authors: Takoua Soltani, Imen Soltani, Taoufik Aguili
Abstract:
The demands of communications and radar systems give rise to new developments in the domain of active integrated antennas (AIA) and arrays. The main advantages of AIA arrays are simplicity of fabrication, low manufacturing cost, and the combination of free-space power combining and scanning without a phase shifter. Modeling an active integrated antenna requires coupling the electromagnetic model with the transport model, a coupling that becomes significant at high frequencies. Global modeling of active circuits is important for simulating EM coupling, the interaction between active devices and EM waves, and the effects of EM radiation on active and passive components. The current work focuses on the modeling of the active element, a MESFET transistor immersed in a rectangular waveguide. The proposed EM analysis is based on the Method of Moments combined with the Generalised Equivalent Circuit method (MOM-GEC). The Method of Moments is one of the most common and powerful numerical techniques for solving electromagnetic problems; within this class of techniques, MoM is the dominant approach for solving the Maxwell and transport integral equations for an active integrated antenna. In this situation, the equivalent circuit is introduced to develop an integral method formulation based on the transposition of field problems into a generalised equivalent circuit that is simpler to treat. The Method of the Generalised Equivalent Circuit (MGEC) was suggested in order to represent, as circuits, the integral equations that describe the unknown electromagnetic boundary conditions. The equivalent circuit presents a true electric image of the studied structures, describing the discontinuity and its environment. The aim of our method is to investigate antenna parameters such as the input impedance, the current density distribution and the electric field distribution. In this work, we propose a global EM model of the GaAs MESFET transistor using an integral method. We begin by describing the modeling structure, which allows an equivalent EM scheme translating the electromagnetic equations considered to be defined. Secondly, the projection of these equations onto common-type test functions leads to a linear matrix equation in which the unknown variable represents the amplitudes of the current density. Solving this equation provides the input impedance, the current density distribution and the electric field distribution. From the electromagnetic calculations, we were able to present the convergence of the input impedance for different numbers of test functions as a function of the number of guide modes. This paper presents a pilot study aimed at mapping out the variation of the current evaluated by the MOM-GEC. The essential improvement of our method is the reduction of the computing time and memory requirements needed to provide a sufficient global model of the MESFET transistor.
Keywords: active integrated antenna, current density, input impedance, MESFET transistor, MOM-GEC method
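The projection step described above yields a dense linear system whose solution gives the current-density amplitudes, from which quantities such as the input impedance follow. The sketch below is a generic moment-method solve with a synthetic, well-conditioned matrix; it is not the MESFET/waveguide operator or excitation of the paper.

```python
import numpy as np

# Generic sketch of the step described above: projecting the integral equation on
# N test functions yields a dense linear system [Z][I] = [V], whose solution gives
# the current-density amplitudes. The matrix below is synthetic, not the paper's operator.
N = 8
rng = np.random.default_rng(42)
Z = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)) + 5 * np.eye(N)  # toy operator
V = np.zeros(N, dtype=complex)
V[0] = 1.0                                  # unit excitation on the first basis function

I = np.linalg.solve(Z, V)                   # current-density amplitudes
Z_in = V[0] / I[0]                          # input impedance seen at the excited port
print("amplitudes:", np.round(I, 3))
print("input impedance:", np.round(Z_in, 3))
```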
Procedia PDF Downloads 197
65 Technology Assessment of the Collection of Cast Seaweed and Use as Feedstock for Biogas Production - The Case of Solrød, Denmark
Authors: Rikke Lybæk, Tyge Kjær
Abstract:
The Baltic Sea is suffering from nitrogen and phosphorus pollution, which causes eutrophication of the marine environment and hence threatens the biodiversity of the Baltic Sea area. The increased quantity of nutrients in the water has created challenges with the growth of seaweed, which is cast up on beaches around the sea. The cast seaweed has led to odor problems hampering the use of beach areas around the Bay of Køge in Denmark. This is the case in, e.g., Solrød Municipality, where recreational activities have been disrupted when cast seaweed piles up on the beach. Initiatives have, however, been introduced within the municipality to remove the cast seaweed from the beach and utilize it for renewable energy production at the nearby Solrød Biogas Plant, where it is co-digested with animal manure for power and heat production. This paper investigates which types of technology have been applied in the effort to optimize the collection of cast seaweed, and further reveals how the seaweed has been pre-treated at the biogas plant for the most efficient energy production, including the challenges connected with its sand content. The heavy metal content of the seaweed, and how it is managed, will also be addressed, which is vital as the digestate is utilized as soil fertilizer on nearby farms. Finally, the paper outlines the energy production scheme connected to the use of seaweed as feedstock for biogas production, as well as the amount of nitrogen-rich fertilizer produced. The theoretical approach adopted in the paper relies on the thinking of the circular bio-economy, in which biological materials are cascaded and re-circulated to increase and extend their value and usability. The data for this research were collected as part of the EU Interreg project "Cluster On Anaerobic digestion, environmental Services, and nuTrients removAL" (COASTAL Biogas), 2014-2020. Data gathering consisted of, e.g., interviews with relevant stakeholders connected to seaweed collection and the operation of the biogas plant in Solrød Municipality. It further entailed studies of progress and evaluation reports from the municipality, analysis of seaweed digestion results from scholars connected to the research, and studies of the scientific literature to supplement the above. Besides this, observations and photo documentation have been applied in the field. This paper concludes, among other things, that the seaweed harvester technology currently adopted is functional in the maritime environment close to the beachfront but inadequate for collecting seaweed directly on the beach. New technology hence needs to be developed to increase the efficiency of seaweed collection. It is further concluded that the amount of sand transported to Solrød Biogas Plant with the seaweed continues to pose challenges. The seaweed is pre-treated for sand in a receiving tank with a strong stirrer that washes off the sand, which settles at the bottom of the tank, where it is collected. The seaweed is then chopped by a macerator and mixed with the other feedstock. The wear on the receiving tank stirrer and the macerator is, however, significant, and new methods should be adopted.
Keywords: biogas, circular bio-economy, Denmark, maritime technology, cast seaweed, Solrød Municipality
Procedia PDF Downloads 29164 Investigating the Nature of Transactions Behind Violations Along Bangalore’s Lakes
Authors: Sakshi Saxena
Abstract:
Bangalore is an IT industry-based metropolitan city in the state of Karnataka in India. It has experienced tremendous urbanization at the expense of the environment. Questions about why development occurs over and near ecologically sensitive areas have been raised by several instances of disappearing lakes. Lakes in Bangalore can be considered commons on both a local and a regional scale, and these water bodies are becoming less interconnected because of encroachment in their catchment areas. Other sociocultural and environmental risks that have led to social issues are now a source of concern. The lakes serve as an example of the transformation of commons, a dilemma that arises as land is transformed from rural to urban uses, as well as of the complicated institutional issues associated with their governance. According to some scholarly work and ecologists, a nexus of public and commercial institutions is primarily responsible for the depletion of water tanks and the inefficiency of the planning process. It is said that Bangalore's growth as an urban centre, together with the demands it created, particularly on land and water, resulted in the emergence of a demanding and self-assured middle and upper class. This study therefore seeks to understand the issues and problems which led to these encroachments and to document violations, if any, around these lakes and tanks that arose during these decades. To claim watersheds and lake edges as properties, institutional arrangements (organizations, laws, and policies) intersect with planning authorities. Because of unregulated or indiscriminate forms of urbanization, it is claimed that the engagement of actors and the negotiations of the process, including government ignorance, are allowing this problem to flourish. In general, the governance of natural resources in India is largely state-based. This is due to the constitutional scheme, which since the Government of India Act, 1935 has in principle given the power to the states to legislate in this area. Thus, states have the exclusive power to regulate water supplies, irrigation and canals, drainage and embankments, water storage, hydropower, and fisheries. The main aim is therefore to understand institutional arrangements and the master planning processes behind these arrangements. To illustrate the ambiguity with an example, custodianship alone is a role divided between two state and two city-level bodies. This creates regulatory ambiguity, and the effects on the environment include changes in city temperature, urban flooding, etc. As established, the main kinds of issues around lakes/tanks in Bangalore are encroachment and depletion. This study will further be enhanced by a physical survey of three of these lakes, focusing on the Bellandur site and the stakeholders involved. According to the study's findings thus far, corrupt politicians and dubious land transaction tools are involved in the real estate industry. It appears that some destruction could have been stopped, or at least mitigated, if there had been a robust system of urban planning processes in place along with strong institutional arrangements to protect the lakes.Keywords: wetlands, lakes, urbanization, bangalore, politics, reservoirs, municipal jurisdiction, lake connections, institutions
Procedia PDF Downloads 7763 Computational Team Dynamics and Interaction Patterns in New Product Development Teams
Authors: Shankaran Sitarama
Abstract:
New Product Development (NPD) is invariably a team effort and involves effective teamwork. An NPD team has members from different disciplines coming together and working through the different phases, all the way from the conceptual design phase to production and product roll-out. Creativity and innovation are some of the key factors of successful NPD. Team members going through the different phases of NPD interact and work closely, yet challenge each other during the design phases to brainstorm on ideas and later converge to work together. These two traits require the teams to exercise divergent and convergent thinking simultaneously, and there needs to be a good balance between them. The team dynamics invariably result in conflicts among team members. While some amount of conflict (ideational conflict) is desirable in NPD teams to be creative as a group, relational conflicts (or discords among members) can be detrimental to teamwork. Team communication reflects these tensions and team dynamics. In this research, team communication (emails) between the members of NPD teams is considered for analysis. The email communication is processed through a semantic analysis algorithm (LSA) to analyze the content of communication, and a semantic similarity analysis is used to arrive at a social network graph that depicts the communication amongst team members based on the content of communication. The amount of communication (content, not frequency of communication) defines the interaction strength between the members. A social network adjacency matrix is thus obtained for the team. Standard social network analysis techniques based on the Adjacency Matrix (AM) and the Dichotomized Adjacency Matrix (DAM), dichotomized using network density, yield network graphs and network metrics like centrality; a minimal sketch of this pipeline is given below. The social network graphs are then rendered for visual representation using a Metric Multi-Dimensional Scaling (MMDS) algorithm for node placement, and arcs connecting the nodes (representing team members) are drawn. The distance between the nodes in the placement represents the tie-strength between the members: stronger tie-strengths render nodes closer. The overall visual representation of the social network graph provides a clear picture of the team's interactions. This research reveals four distinct patterns of team interaction that are clearly identifiable in the visual representation of the social network graph and have a clearly defined computational scheme. The four computational patterns of team interaction defined are the Central Member Pattern (CMP), the Subgroup and Aloof member Pattern (SAP), the Isolate Member Pattern (IMP), and the Pendant Member Pattern (PMP). Each of these patterns has a team dynamics implication in terms of the conflict level in the team. For instance, the isolate member pattern clearly points to a near break-down in communication with the member and hence a possible high conflict level, whereas the subgroup or aloof member pattern points to a non-uniform information flow in the team and some moderate level of conflict. These pattern classifications of teams are then compared and correlated to the real level of conflict in the teams, as indicated by the team members through an elaborate self-evaluation, team reflection, and feedback form, and the results show a good correlation.Keywords: team dynamics, team communication, team interactions, social network analysis, sna, new product development, latent semantic analysis, LSA, NPD teams
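The content-based adjacency construction described above can be illustrated with a short Python sketch: LSA via TF-IDF plus truncated SVD, cosine similarity between members' aggregated email text as tie strength, dichotomization at a network-mean threshold, and degree centrality. The toy emails, the LSA dimensionality, and the thresholding rule are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# one aggregated email corpus per team member (toy placeholder text)
members = ["Ana", "Ben", "Chloe", "Dev"]
corpus = [
    "concept sketches design review tolerance stack materials",
    "design review tolerance budget schedule supplier quote",
    "marketing launch plan customer feedback pricing",
    "tolerance stack materials supplier quote prototype test",
]

# LSA: TF-IDF followed by truncated SVD into a low-dimensional semantic space
tfidf = TfidfVectorizer().fit_transform(corpus)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# adjacency matrix: semantic similarity of communication content as tie strength
AM = cosine_similarity(lsa)
np.fill_diagonal(AM, 0.0)

# dichotomized adjacency matrix: keep ties stronger than the network mean
n = len(members)
density_threshold = AM.sum() / (n * (n - 1))
DAM = (AM > density_threshold).astype(int)

# degree centrality from the dichotomized network
centrality = DAM.sum(axis=1) / (n - 1)
for name, c in zip(members, centrality):
    print(f"{name}: degree centrality = {c:.2f}")
```

A member with near-zero degree centrality in the dichotomized network would correspond to the isolate pattern discussed above, while one connected to almost everyone would correspond to the central member pattern.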
Procedia PDF Downloads 6862 Thermal Energy Storage Based on Molten Salts Containing Nano-Particles: Dispersion Stability and Thermal Conductivity Using Multi-Scale Computational Modelling
Authors: Bashar Mahmoud, Lee Mortimer, Michael Fairweather
Abstract:
New methods have recently been introduced to improve the thermal property values of molten nitrate salts (a binary mixture of NaNO3:KNO3 in 60:40 wt. %) by doping them with minute concentrations of nanoparticles, in the range of 0.5 to 1.5 wt. %, to form a so-called nano-heat-transfer-fluid suited to thermal energy transfer and storage applications. The present study aims to assess the stability of these nanofluids using an advanced computational modelling technique, Lagrangian particle tracking. A multi-phase solid-liquid model is used, where the motion of the embedded nanoparticles in the suspending fluid is treated by an Euler-Lagrange hybrid scheme with fixed time stepping. This technique enables the various multi-scale forces, whose characteristic length and time scales are quite different, to be evaluated. Two systems are considered, both consisting of 50 nm Al2O3 ceramic nanoparticles suspended in fluids of different density ratios. This includes both water (5 to 95 °C) and molten nitrate salt (220 to 500 °C) at volume fractions ranging between 1% and 5%. The dynamic properties of both phases are coupled to the ambient temperature of the fluid suspension. The three-dimensional computational region consists of a 1 μm cube, and particles are homogeneously distributed across the domain. Periodic boundary conditions are enforced. The particle equations of motion are integrated using the fourth-order Runge-Kutta algorithm with a very small time-step, Δts, set at 10⁻¹¹ s; an illustrative sketch of such an integration step is given below. The implemented technique captures the key dynamics of aggregating nanoparticles, which involve Brownian motion, soft-sphere particle-particle collisions, and Derjaguin, Landau, Verwey, and Overbeek (DLVO) forces. These mechanisms form the basis of the predictive model of aggregation in nano-suspensions. An energy-transport-based method of predicting the thermal conductivity of the nanofluids is also used to determine the thermal properties of the suspension. The simulation results confirm the effectiveness of the technique, with values in excellent agreement with the theoretical and experimental data obtained from similar studies. The predictions indicate the role of Brownian motion and the DLVO force (comprising both the repulsive electric double layer and the attractive Van der Waals contribution) and their influence on the level of nanoparticle agglomeration, with the nano-aggregates formed found to play a key role in governing the thermal behavior of the nanofluids at the various particle concentrations. The presentation will include a quantitative assessment of these forces and mechanisms, leading to conclusions about nanofluid heat transfer performance and thermal characteristics and their potential application in solar thermal energy plants.Keywords: thermal energy storage, molten salt, nano-fluids, multi-scale computational modelling
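As an illustration of the particle-tracking step, the following Python sketch advances a single nanoparticle with a fourth-order Runge-Kutta step for the deterministic Stokes drag and adds a Brownian displacement drawn from the Stokes-Einstein relation once per step. The fluid properties, particle size, and the handling of the stochastic term are simplified placeholders, not the full multi-force DLVO/soft-sphere model of the study.

```python
import numpy as np

kB = 1.380649e-23          # Boltzmann constant, J/K

def drag_accel(v, d_p, rho_p, mu):
    """Deterministic Stokes drag acceleration on a sphere (placeholder force model)."""
    m_p = rho_p * np.pi * d_p**3 / 6.0
    return -3.0 * np.pi * mu * d_p * v / m_p

def rk4_step(v, dt, d_p, rho_p, mu):
    """Fourth-order Runge-Kutta step for the particle velocity under drag alone."""
    k1 = drag_accel(v, d_p, rho_p, mu)
    k2 = drag_accel(v + 0.5 * dt * k1, d_p, rho_p, mu)
    k3 = drag_accel(v + 0.5 * dt * k2, d_p, rho_p, mu)
    k4 = drag_accel(v + dt * k3, d_p, rho_p, mu)
    return v + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# illustrative parameters: 50 nm alumina particle in a hot carrier fluid
d_p, rho_p = 50e-9, 3950.0          # diameter (m), particle density (kg/m^3)
mu, T = 1.0e-3, 500.0               # fluid viscosity (Pa s) and temperature (K) -- assumed
dt = 1.0e-11                        # fixed time step, as quoted in the abstract
D = kB * T / (3.0 * np.pi * mu * d_p)   # Stokes-Einstein diffusivity

rng = np.random.default_rng(0)
x = np.zeros(3)
v = np.zeros(3)
for _ in range(10_000):
    v = rk4_step(v, dt, d_p, rho_p, mu)                          # deterministic part (RK4)
    x += v * dt + rng.normal(0.0, np.sqrt(2 * D * dt), size=3)   # Brownian kick per step
print("displacement after 100 ns:", x)
```

In the full model the RK4 right-hand side would also include the DLVO pair forces and soft-sphere collision forces between neighbouring particles; here only the drag term is retained to keep the sketch short.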
Procedia PDF Downloads 19061 Two-wavelength High-energy Cr:LiCaAlF6 MOPA Laser System for Medical Multispectral Optoacoustic Tomography
Authors: Radik D. Aglyamov, Alexander K. Naumov, Alexey A. Shavelev, Oleg A. Morozov, Arsenij D. Shishkin, Yury P. Brodnikovsky, Alexander A. Karabutov, Alexander A. Oraevsky, Vadim V. Semashko
Abstract:
The development of medical optoacoustic tomography using human blood as an endogenous contrast agent is constrained by the lack of reliable, easy-to-use and inexpensive sources of high-power pulsed laser radiation in the spectral region of 750-900 nm [1-2]. The titanium-sapphire and alexandrite lasers or optical parametric oscillators currently used do not provide the required and stable output characteristics, are structurally complex, and their cost can amount to half the price of diagnostic optoacoustic systems. Here we develop lasers based on Cr:LiCaAlF6 crystals which are free of the abovementioned disadvantages and provide intense, tens-of-nanosecond tunable laser radiation at the specific absorption bands of oxyhemoglobin (~840 nm) and deoxyhemoglobin (~757 nm) in the blood. Cr:LiCAF (c = 3 at.%) crystals were grown at Kazan Federal University by vertical directional crystallization (the Bridgman technique) in graphite crucibles in a fluorinating atmosphere at argon overpressure (P = 1500 hPa) [3]. The laser elements are cylindrical, 8 mm in diameter and 90 mm in length. The direction of the optical axis of the crystal was normal to the cylinder generatrix, which provides π-polarized laser action corresponding to the maximal stimulated emission cross-section. The flat working surfaces of the active elements were polished and parallel to each other with an error of less than 10”. No antireflection coating was applied. A Q-switched master oscillator-power amplifier (MOPA) laser system with dual-Xenon-flashlamp pumping in a diffuse-reflectivity close-coupled head was realized. A specially designed laser cavity, consisting of dielectric highly reflective mirrors with a 2 m curvature radius, a flat output mirror, a polarizer and a Q-switch cell, makes it possible to operate sequentially, one 50 ns laser pulse after another, at wavelengths of 757 and 840 nm. The programmable pumping system from Tomowave Laser LLC (Russia) provided independent pumping for each pulse (up to 250 J at 180 μs) to equalize the laser radiation intensity at these wavelengths. The MOPA laser operates at a 10 Hz pulse repetition rate with an output energy of up to 210 mJ. Taking into account the limitations associated with physiological movements and other characteristics of patient tissues, the duration of the laser pulses and their energy allow molecular and functional high-contrast imaging to depths of 5-6 cm with a spatial resolution of at least 1 mm. Further comprehensive design of the laser is expected to improve the output properties and realize better spatial resolution for medical multispectral optoacoustic tomography systems.Keywords: medical optoacoustic, endogenic contrast agent, multiwavelength tunable pulse lasers, MOPA laser system
Procedia PDF Downloads 9960 Ammonia Cracking: Catalysts and Process Configurations for Enhanced Performance
Authors: Frea Van Steenweghen, Lander Hollevoet, Johan A. Martens
Abstract:
Compared to other hydrogen (H₂) carriers, ammonia (NH₃) is one of the most promising, as it contains 17.6 wt% hydrogen. It is easily liquefied at ≈ 9–10 bar pressure at ambient temperature. More importantly, NH₃ is a carbon-free hydrogen carrier with no CO₂ emission at final decomposition. Ammonia has a well-defined regulatory framework and a good track record regarding safety. Furthermore, industry already has an existing transport infrastructure consisting of pipelines, tank trucks and shipping technology, as ammonia has been manufactured and distributed around the world for over a century. While NH₃ synthesis and transportation technological solutions are at hand, a missing link in the hydrogen delivery scheme from ammonia is an energy-lean and efficient technology for cracking ammonia into H₂ and N₂. The most explored option for ammonia decomposition is thermo-catalytic cracking, which is the most energy-lean and robust approach compared to other technologies such as plasma and electrolysis. The decomposition reaction is favoured only at high temperatures (> 300°C) and low pressures (1 bar), as the thermocatalytic ammonia cracking process faces thermodynamic limitations. At 350°C, the thermodynamic equilibrium at 1 bar pressure limits the conversion to 99%. Gaining additional conversion up to, e.g., 99.9% necessitates heating to ca. 530°C. However, actually reaching thermodynamic equilibrium is infeasible, as a sufficient driving force is needed, requiring even higher temperatures; limiting the conversion below the equilibrium composition is a more economical option. A worked example of this equilibrium limit is sketched below. Thermocatalytic ammonia cracking is documented in the scientific literature. Among the investigated metal catalysts (Ru, Co, Ni, Fe, …), ruthenium is known to be the most active for ammonia decomposition, with an onset of cracking activity around 350°C. To establish > 99% conversion, temperatures close to 600°C are required. Such high temperatures are likely to reduce not only the round-trip efficiency but also the catalyst lifetime, because of sintering of the supported metal phase. In this research, the first focus was on catalyst bed design, avoiding diffusion limitations. Experiments in our packed bed tubular reactor set-up showed that extragranular diffusion limitations occur at low concentrations of NH₃ when reaching high conversion, a phenomenon often overlooked in experimental work. A second focus was thermocatalyst development for ammonia cracking that avoids the use of noble metals. To this aim, candidate metals and mixtures were deposited on a range of supports. Sintering resistance at high temperatures and the basicity of the support were found to be crucial catalyst properties. The catalytic activity was promoted by adding alkali and alkaline earth metals. A third focus was studying the optimum process configuration by process simulations. A trade-off between conversion and favorable operational conditions (i.e., low pressure and high temperature) may lead to different process configurations, each with its own pros and cons. For example, high-pressure cracking would eliminate the need for post-compression but is detrimental to the thermodynamic equilibrium, leading to an optimum in cracking pressure in terms of energy cost.Keywords: ammonia cracking, catalyst research, kinetics, process simulation, thermodynamic equilibrium
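The equilibrium limit quoted above can be reproduced approximately with a short Python sketch that treats NH₃ ⇌ ½N₂ + 3/2H₂ with a van't Hoff-type equilibrium constant built from roughly ΔH° ≈ +46 kJ/mol and ΔS° ≈ +99 J/(mol·K) for the decomposition, assumed temperature-independent. These thermochemical values and the ideal-gas treatment are simplifications, so the numbers only approximate the figures cited in the abstract.

```python
import numpy as np
from scipy.optimize import brentq

R = 8.314          # J/(mol K)
DH = 45.9e3        # J/mol, approx. enthalpy of NH3 -> 1/2 N2 + 3/2 H2 (assumed constant)
DS = 99.0          # J/(mol K), approx. entropy change (assumed constant)

def Kp(T):
    """Equilibrium constant (partial pressures in bar) from dG = dH - T*dS."""
    return np.exp(-(DH - T * DS) / (R * T))

def equilibrium_conversion(T, P):
    """Solve for the NH3 conversion x at temperature T (K) and total pressure P (bar)."""
    def residual(x):
        # start from 1 mol NH3: moles are (1-x, x/2, 3x/2), total 1+x
        y_nh3 = (1 - x) / (1 + x)
        y_n2 = 0.5 * x / (1 + x)
        y_h2 = 1.5 * x / (1 + x)
        return P * y_n2**0.5 * y_h2**1.5 / y_nh3 - Kp(T)
    return brentq(residual, 1e-9, 1 - 1e-12)

for T_C, P in [(350, 1), (530, 1), (350, 10), (600, 10)]:
    x = equilibrium_conversion(T_C + 273.15, P)
    print(f"{T_C:4d} degC, {P:3d} bar -> equilibrium conversion ~ {100 * x:.1f}%")
```

The sketch reproduces the trends discussed above: conversion at 1 bar is limited to the high-90s percent range around 350°C, approaches complete conversion only at higher temperatures, and drops markedly as the cracking pressure is raised.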
Procedia PDF Downloads 6559 Enhancement to Green Building Rating Systems for Industrial Facilities by Including the Assessment of Impact on the Landscape
Authors: Lia Marchi, Ernesto Antonini
Abstract:
The impact of industrial sites on people's living environment involves both detrimental effects on the ecosystem and perceptual-aesthetic interference with the scenery. These, in turn, affect the economic and social value of the landscape, as well as the wellbeing of workers and local communities. Given how widespread the phenomenon is and the relevance of its effects, the need emerges for a joint approach to assess, and thus mitigate, the impact of factories on the landscape, the latter being understood as the result of the action and interaction of natural and human factors. However, the impact assessment tools suitable for this purpose are quite heterogeneous and mostly monodisciplinary. On the one hand, green building rating systems (GBRSs) are increasingly used to evaluate the performance of manufacturing sites, mainly through quantitative indicators focused on environmental issues. On the other hand, methods to detect the visual and social impact of factories on the landscape are gradually emerging in the literature, but they generally adopt only qualitative gauges. The research addresses the integration of environmental impact assessment with the perceptual-aesthetic interference of factories on the landscape. The GBRS model is assumed as a reference, since it can simultaneously investigate the different topics which affect sustainability and return a global score. A critical analysis of GBRSs relevant to industrial facilities led to the selection of the U.S. GBC LEED protocol as the most suitable for this scope. A revision of LEED v4 Building Design+Construction has then been produced by including specific indicators to measure the interference of manufacturing sites with the perceptual-aesthetic and social aspects of the territory. To this end, a new impact category was defined, namely ‘PA - Perceptual-aesthetic aspects’, comprising eight new credits which are specifically designed to assess how well the buildings harmonise with their surroundings: these investigate, for example, the morphological and chromatic harmonization of the facility with the scenery, or the site's receptiveness and attractiveness. The credits weighting table was consequently revised according to the LEED points allocation system; a schematic example of such a scoring structure is given below. As with all LEED credits, each new PA credit is thoroughly described in a sheet setting out its aim, its requirements, and the available options to gauge the interference and obtain a score. Lastly, each credit is related to mitigation tactics, which are drawn from a catalogue of exemplary case studies, also developed by the research. The result is a modified LEED scheme which includes compatibility with the landscape within the sustainability assessment of industrial sites. The whole system consists of 10 evaluation categories, which contain 62 credits in total. Lastly, a test of the tool on an Italian factory was performed, allowing the comparison of three mitigation scenarios with increasing levels of compatibility. The study proposes a holistic and viable approach to the environmental impact assessment of factories through a tool which integrates the multiple aspects involved within a worldwide recognized rating protocol.Keywords: environmental impact, GBRS, landscape, LEED, sustainable factory
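A minimal sketch of how such a revised credit structure could be encoded and scored is given below. The credit names in the hypothetical 'PA' category echo those mentioned in the abstract, but the point values, the other categories shown, and the certification bands applied to the total are illustrative assumptions rather than the authors' actual weighting table.

```python
from dataclasses import dataclass

@dataclass
class Credit:
    name: str
    max_points: int
    awarded: int = 0

# illustrative excerpt of a revised rating scheme: LEED-style categories plus the
# new 'PA - Perceptual-aesthetic aspects' category (point values assumed)
scheme = {
    "Sustainable Sites": [Credit("Site assessment", 1), Credit("Open space", 1)],
    "Energy and Atmosphere": [Credit("Optimize energy performance", 18)],
    "PA - Perceptual-aesthetic aspects": [
        Credit("Morphological harmonization with the scenery", 2),
        Credit("Chromatic harmonization with the scenery", 2),
        Credit("Site receptiveness", 2),
        Credit("Site attractiveness", 2),
    ],
}

def total_score(scheme):
    return sum(c.awarded for credits in scheme.values() for c in credits)

def rating(points):
    # LEED-style certification bands, applied here purely for illustration
    if points >= 80: return "Platinum"
    if points >= 60: return "Gold"
    if points >= 50: return "Silver"
    if points >= 40: return "Certified"
    return "Not certified"

# example: score one mitigation scenario for a candidate factory design
scheme["PA - Perceptual-aesthetic aspects"][0].awarded = 2
scheme["Energy and Atmosphere"][0].awarded = 12
points = total_score(scheme)
print(points, rating(points))
```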
Procedia PDF Downloads 11158 Supplementing Aerial-Roving Surveys with Autonomous Optical Cameras: A High Temporal Resolution Approach to Monitoring and Estimating Effort within a Recreational Salmon Fishery in British Columbia, Canada
Authors: Ben Morrow, Patrick O'Hara, Natalie Ban, Tunai Marques, Molly Fraser, Christopher Bone
Abstract:
Relative to commercial fisheries, recreational fisheries are often poorly understood and pose various challenges for monitoring frameworks. In British Columbia (BC), Canada, Pacific salmon are heavily targeted by recreational fishers while also being a key source of nutrient flow and crucial prey for a variety of marine and terrestrial fauna, including endangered Southern Resident killer whales (Orcinus orca). Although commercial fisheries were historically responsible for the majority of salmon retention, recreational fishing now comprises both greater effort and retention. The current monitoring scheme for recreational salmon fisheries involves aerial-roving creel surveys. However, this method has been identified as costly and having low predictive power as it is often limited to sampling fragments of fluid and temporally dynamic fisheries. This study used imagery from two shore-based autonomous cameras in a highly active recreational fishery around Sooke, BC, and evaluated their efficacy in supplementing existing aerial-roving surveys for monitoring a recreational salmon fishery. This study involved continuous monitoring and high temporal resolution (over one million images analyzed in a single fishing season), using a deep learning-based vessel detection algorithm and a custom image annotation tool to efficiently thin datasets. This allowed for the quantification of peak-season effort from a busy harbour, species-specific retention estimates, high levels of detected fishing events at a nearby popular fishing location, as well as the proportion of the fishery management area represented by cameras. Then, this study demonstrated how it could substantially enhance the temporal resolution of a fishery through diel activity pattern analyses, scaled monthly to visualize clusters of activity. This work also highlighted considerable off-season fishing detection, currently unaccounted for in the existing monitoring framework. These results demonstrate several distinct applications of autonomous cameras for providing enhanced detail currently unavailable in the current monitoring framework, each of which has important considerations for the managerial allocation of resources. Further, the approach and methodology can benefit other studies that apply shore-based camera monitoring, supplement aerial-roving creel surveys to improve fine-scale temporal understanding, inform the optimal timing of creel surveys, and improve the predictive power of recreational stock assessments to preserve important and endangered fish species.Keywords: cameras, monitoring, recreational fishing, stock assessment
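As an illustration of the diel-activity aggregation step mentioned above, the sketch below bins timestamped vessel detections by month and hour of day with pandas. The detection records and column names are invented placeholders; the real study derives detections from a deep-learning vessel detector rather than the synthetic timestamps used here.

```python
import numpy as np
import pandas as pd

# synthetic stand-in for camera-derived vessel detections (one row per detection)
rng = np.random.default_rng(42)
timestamps = pd.to_datetime("2021-05-01") + pd.to_timedelta(
    rng.integers(0, 150 * 24 * 3600, size=5000), unit="s"
)
detections = pd.DataFrame({"detected_at": timestamps})

# diel activity pattern, scaled monthly: detections per (hour-of-day, month) cell
detections["month"] = detections["detected_at"].dt.month
detections["hour"] = detections["detected_at"].dt.hour
diel = detections.pivot_table(index="hour", columns="month",
                              values="detected_at", aggfunc="count").fillna(0)

print(diel.head())   # rows = hour of day, columns = month, values = detection counts
```

Plotting this table as a heatmap would reveal the monthly clusters of fishing activity described in the abstract, including any detections falling outside the period covered by the aerial-roving surveys.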
Procedia PDF Downloads 12257 Linear Evolution of Compressible Görtler Vortices Subject to Free-Stream Vortical Disturbances
Authors: Samuele Viaro, Pierre Ricco
Abstract:
Görtler instabilities arise in boundary layers from an imbalance between pressure and centrifugal forces caused by concave surfaces. Their spatial streamwise evolution influences transition to turbulence. It is therefore important to understand even the early stages, where the perturbations, still small, grow linearly and could be controlled more easily. This work presents a rigorous theoretical framework for compressible flows using the linearized unsteady boundary region equations, where only the streamwise pressure gradient and streamwise diffusion terms are neglected from the full governing equations of fluid motion. Boundary and initial conditions are imposed through an asymptotic analysis in order to account for the interaction of the boundary layer with free-stream turbulence. The resulting parabolic system is discretized with a second-order finite difference scheme; a minimal sketch of such a streamwise-marching discretization is given below. Realistic flow parameters are chosen from wind tunnel studies performed at supersonic and subsonic conditions. The Mach number ranges from 0.5 to 8, with two different radii of curvature, 5 m and 10 m, frequencies up to 2000 Hz, and vortex spanwise wavelengths from 5 mm to 20 mm. The evolution of the perturbation flow is shown through velocity, temperature, and pressure profiles relatively close to the leading edge, where non-linear effects can still be neglected, and through the growth rate. Results show a global stabilizing effect with increasing Mach number, frequency, spanwise wavenumber, and radius of curvature. In particular, at high Mach numbers curvature effects are less pronounced and thermal streaks become stronger than velocity streaks. This increase of temperature perturbations saturates at approximately Mach 4 and is limited to the early stage of growth, near the leading edge. In general, Görtler vortices evolve closer to the surface than in a flat plate scenario, but their location shifts toward the edge of the boundary layer as the Mach number increases. In fact, a jet-like behavior appears for steady vortices having small spanwise wavelengths (less than 10 mm) at Mach 8, creating a region of unperturbed flow close to the wall. A similar response is also found at the highest frequency considered for a Mach 3 flow. Larger vortices are found to have a higher growth rate but are less influenced by the Mach number. An eigenvalue approach is also employed to study the amplification of the perturbations sufficiently far downstream from the leading edge. These eigenvalue results are compared with the ones obtained through the initial value approach with inhomogeneous free-stream boundary conditions. All of the parameters studied here have a significant influence on the evolution of the instabilities for the Görtler problem, which is indeed highly dependent on initial conditions.Keywords: compressible boundary layers, Görtler instabilities, receptivity, turbulence transition
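To illustrate the kind of streamwise-marching, second-order discretization used for a parabolic system, the following Python sketch marches the model equation ∂u/∂x = ∂²u/∂y² (a scalar stand-in for the boundary region equations) with a Crank-Nicolson step in x and second-order central differences in y. The model equation, the grid, and the boundary forcing are illustrative assumptions, not the compressible equations solved in the paper.

```python
import numpy as np

def march_parabolic(n_y=101, n_x=400, y_max=10.0, x_max=1.0, u_edge=1.0):
    """Crank-Nicolson streamwise marching of the model equation u_x = u_yy,
    second-order in both x and y (a scalar stand-in for a parabolic
    boundary-region system)."""
    y = np.linspace(0.0, y_max, n_y)
    dy = y[1] - y[0]
    dx = x_max / n_x
    r = 0.5 * dx / dy**2

    # second-order central-difference operator with Dirichlet rows at the
    # wall (u = 0) and at the free-stream edge (u = u_edge)
    A = np.eye(n_y)        # left-hand matrix  (I - r D2)
    B = np.eye(n_y)        # right-hand matrix (I + r D2)
    for j in range(1, n_y - 1):
        A[j, j - 1], A[j, j], A[j, j + 1] = -r, 1 + 2 * r, -r
        B[j, j - 1], B[j, j], B[j, j + 1] = r, 1 - 2 * r, r

    u = np.zeros(n_y)      # leading-edge profile (perturbation initially zero)
    for _ in range(n_x):   # march downstream, one station at a time
        rhs = B @ u
        rhs[0], rhs[-1] = 0.0, u_edge   # boundary conditions at this station
        u = np.linalg.solve(A, rhs)
    return y, u

y, u = march_parabolic()
print("profile at the final streamwise station (first 5 wall points):", u[:5])
```

Because the system is parabolic in the streamwise direction, no downstream information is needed: each station is obtained from the previous one, which is what makes the marching approach inexpensive compared with a fully elliptic solve.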
Procedia PDF Downloads 25356 Seismic Response Control of Multi-Span Bridge Using Magnetorheological Dampers
Authors: B. Neethu, Diptesh Das
Abstract:
The present study investigates the performance of a semi-active controller using magnetorheological (MR) dampers for seismic response reduction of a multi-span bridge. The application of structural control to structures during earthquake excitation involves numerous challenges, such as the proper formulation and selection of the control strategy, mathematical modeling of the system, uncertainty in system parameters, and noisy measurements. These problems, however, need to be tackled in order to design and develop controllers which will perform efficiently in such complex systems. A control algorithm which can accommodate uncertainty and imprecision better than most alternatives, owing to its inherent robustness and ability to cope with parameter uncertainties, is the sliding mode algorithm. A sliding mode control algorithm is therefore adopted in the present study due to its inherent stability and distinguished robustness to system parameter variation and external disturbances. In general, a semi-active control scheme using an MR damper requires two nested controllers: (i) an overall system controller, which derives the control force required to be applied to the structure, and (ii) an MR damper voltage controller, which determines the voltage required to be supplied to the damper in order to generate the desired control force. In the present study, a sliding mode algorithm is used to determine the desired optimal force. The function of the voltage controller is to command the damper to produce the desired force: the clipped optimal algorithm is used to find the command voltage supplied to the MR damper, which is regulated by a semi-active control law based on the sliding mode algorithm. A minimal sketch of this two-level logic is given below. The main objective of the study is to propose a robust semi-active control strategy which can effectively control the responses of the bridge under real earthquake ground motions. A lumped mass model of the bridge is developed, and time history analysis is carried out by solving the governing equations of motion in state space form. The effectiveness of the MR dampers is studied through analytical simulations, subjecting the bridge to real earthquake records. In this regard, it may also be noted that the performance of controllers depends, to a great extent, on the characteristics of the input ground motions. Therefore, in order to study the robustness of the controller, its performance has been investigated for fourteen different earthquake ground motion records. The earthquakes are chosen in such a way that all possible characteristic variations are accommodated. Of these fourteen earthquakes, seven are near-field and seven are far-field; they are further divided into different frequency contents, viz. low-frequency, medium-frequency, and high-frequency earthquakes. The responses of the controlled bridge are compared with the responses of the corresponding uncontrolled bridge (i.e., the bridge without any control devices). The results of the numerical study show that the sliding mode based semi-active control strategy can substantially reduce the seismic responses of the bridge, showing a stable and robust performance for all the earthquakes.Keywords: bridge, semi active control, sliding mode control, MR damper
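The two-level logic described above can be sketched as follows: a sliding-mode law produces the desired control force from the measured states, and a clipped-optimal rule switches the MR damper command voltage between zero and its maximum depending on whether the measured damper force must grow toward the desired one. The sliding surface, gains, and damper parameters are placeholder assumptions, not the bridge model or tuning used in the study.

```python
import numpy as np

V_MAX = 10.0          # maximum command voltage of the MR damper (assumed)
K_SMC = 5.0e4         # sliding-mode gain (assumed)
BOUNDARY = 1e-3       # boundary-layer width used to soften the sign function

def sliding_mode_force(x, c):
    """Desired control force from a sliding surface s = c @ x (placeholder law).
    A saturated sign function is used to reduce chattering."""
    s = float(c @ x)
    return -K_SMC * np.clip(s / BOUNDARY, -1.0, 1.0)

def clipped_optimal_voltage(f_desired, f_measured):
    """Clipped-optimal rule: apply V_MAX only when the measured damper force
    must be increased in magnitude toward the desired force, otherwise 0."""
    return V_MAX if (f_desired - f_measured) * f_measured > 0.0 else 0.0

# one control step for a toy 2-state (displacement, velocity) measurement
x = np.array([0.012, -0.35])        # measured states at a damper location (m, m/s)
c = np.array([20.0, 1.0])           # sliding-surface coefficients (assumed)
f_damper_measured = -3.0e3          # force currently produced by the damper (N)

f_des = sliding_mode_force(x, c)
v_cmd = clipped_optimal_voltage(f_des, f_damper_measured)
print(f"desired force {f_des:.0f} N -> command voltage {v_cmd:.1f} V")
```

In a full simulation this step would be evaluated at every instant of the time history analysis, with the commanded voltage fed into an MR damper model (e.g. a Bouc-Wen type model) that returns the force actually applied to the bridge.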
Procedia PDF Downloads 12355 The Effects of in vitro Digestion on Cheese Bioactivity; Comparing Adult and Elderly Simulated in vitro Gastrointestinal Digestion Models
Authors: A. M. Plante, F. O’Halloran, A. L. McCarthy
Abstract:
By 2050, it is projected that 2 billion people worldwide will be more than 60 years old. Older adults have unique dietary requirements, and aging is associated with physiological changes that affect appetite, sensory perception, metabolism, and digestion. Therefore, it is essential that foods recommended and designed for older adults promote healthy aging. To assess cheese as a functional food for the elderly, a range of commercial cheese products were selected and compared for their antioxidant properties. Cheeses from various milk sources (bovine, goat, sheep) with different textures and fat contents, including cheddar, feta, goats, brie, roquefort, halloumi, wensleydale and gouda, were initially digested with two different simulated in vitro gastrointestinal digestion (SGID) models. One SGID model represented a validated in vitro adult digestion system, and the second model, an elderly SGID, was designed to account for the physiological changes associated with aging. The antioxidant potential of all cheese digestates was investigated using in vitro chemical-based antioxidant assays (2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging, ferric reducing antioxidant power (FRAP), and total phenolic content (TPC)). All adult model digestates had high antioxidant activity across both the DPPH ( > 70%) and FRAP ( > 700 µM Fe²⁺/kg.fw) assays. Following in vitro digestion using the elderly SGID model, full-fat red cheddar, low-fat white cheddar, roquefort, halloumi, wensleydale, and gouda digestates had significantly lower (p ≤ 0.05) DPPH radical scavenging properties compared to the adult model digestates. Full-fat white cheddar had higher DPPH radical scavenging activity following elderly SGID digestion compared to the adult model digestate, but the difference was not significant. All other cheese digestates from the elderly model were comparable to the digestates from the adult model in terms of radical scavenging activity. The FRAP of all elderly digestates was significantly lower (p ≤ 0.05) compared to the adult digestates. Goats cheese was significantly higher (p ≤ 0.05) in FRAP (718 µM Fe²⁺/kg.fw) compared to all other digestates in the elderly model. TPC levels in the soft cheeses (feta, goats) and low-fat cheeses (red cheddar, white cheddar) were significantly lower (p ≤ 0.05) in the elderly digestates compared to the adult digestates. There was no significant difference in TPC levels between the elderly and adult models for the full-fat cheddar (red, white), roquefort, wensleydale, gouda, and brie digestates. Halloumi cheese was the only cheese that was significantly higher in TPC levels following elderly digestion compared to the adult digestates. Low-fat red cheddar had significantly higher (p ≤ 0.05) TPC levels compared to all other digestates for both the adult and elderly digestive systems. Findings from this study demonstrate that aging has an impact on the bioactivity of cheese, as antioxidant activity and TPC levels were lower following in vitro elderly digestion compared to the adult model. For older adults, soft cheese, particularly goats cheese, was associated with high radical scavenging and reducing power, while roquefort cheese had low antioxidant activity. Also, elderly digestates of halloumi and low-fat red cheddar were associated with high TPC levels. Cheese has potential as a functional food for the elderly; however, bioactivity can vary depending on the cheese matrix. 
Funding for this research was provided by the RISAM Scholarship Scheme, Cork Institute of Technology, Ireland.Keywords: antioxidants, cheese, in-vitro digestion, older adults
Procedia PDF Downloads 22554 Co-Movement between Financial Assets: An Empirical Study on Effects of the Depreciation of Yen on Asia Markets
Authors: Yih-Wenn Laih
Abstract:
In recent times, the dependence and co-movement among international financial markets have become stronger than in the past, as evidenced by commentaries in the news media and the financial sections of newspapers. Studying the co-movement between returns in financial markets is an important issue for portfolio management and risk management. Understanding co-movement helps investors to identify opportunities for international portfolio management in terms of asset allocation and pricing. Since the election of the new Prime Minister, Shinzo Abe, in November 2012, the yen has weakened against the US dollar from the 80 to the 120 level. The policies, known as “Abenomics,” aim to encourage private investment through a more aggressive mix of monetary and fiscal policy. Given the close economic relations and competition among Asian markets, it is interesting to examine the co-movement relations, as affected by the depreciation of the yen, between the stock market of Japan and five major Asian stock markets: China, Hong Kong, Korea, Singapore, and Taiwan. Specifically, we measure the co-movement of the stock markets between Japan and each of the five Asian stock markets in terms of rank correlation coefficients. To compute the coefficients, the return series of each stock market is first fitted by a skewed-t GARCH (generalized autoregressive conditional heteroscedasticity) model. Secondly, to measure the dependence structure between matched stock markets, we employ the symmetrized Joe-Clayton (SJC) copula to calculate the probability density function of the paired skewed-t distributions. The joint probability density function is then utilized as the scoring scheme to optimize the sequence alignment by a dynamic programming method. Finally, we compute the rank correlation coefficients (Kendall's τ and Spearman's ρ) between matched stock markets based on their aligned sequences; a minimal sketch of this last step is given below. We collect empirical data for six stock indexes from the Taiwan Economic Journal. The data are sampled at a daily frequency covering the period from January 1, 2013 to July 31, 2015. The empirical distributions of returns exhibit fatter tails than the normal distribution; therefore, the skewed-t distribution and the SJC copula are appropriate for characterizing the data. According to the computed Kendall's τ, Korea has the strongest co-movement relation with Japan, followed by Taiwan, China, and Singapore; the weakest is Hong Kong. On the other hand, the Spearman's ρ reveals that the strength of co-movement with Japan, in decreasing order, is Korea, China, Taiwan, Singapore, and Hong Kong. In summary, we explore the effects of “Abenomics” on Asian stock markets by measuring the co-movement relation between Japan and five major Asian stock markets in terms of rank correlation coefficients. The matched markets are aligned by a hybrid method consisting of GARCH, copula, and sequence alignment. Empirical experiments indicate that Korea has the strongest co-movement relation with Japan, the strength of China and Taiwan is greater than that of Singapore, and the Hong Kong market has the weakest co-movement relation with Japan.Keywords: co-movement, depreciation of Yen, rank correlation, stock market
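The final rank-correlation step can be illustrated with a short Python sketch that computes Kendall's τ and Spearman's ρ between two aligned daily return series using scipy. The synthetic returns below stand in for the GARCH-filtered, copula-scored and aligned sequences of the paper, which the sketch does not reproduce.

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr

# synthetic stand-ins for two aligned daily return series (e.g. Japan vs. Korea);
# in the paper these would be skewed-t GARCH-filtered returns aligned by the
# SJC-copula-scored dynamic programming step, which is omitted here.
rng = np.random.default_rng(1)
common = rng.standard_t(df=5, size=650)                    # shared heavy-tailed factor
japan = 0.8 * common + 0.6 * rng.standard_t(df=5, size=650)
korea = 0.7 * common + 0.7 * rng.standard_t(df=5, size=650)

tau, tau_p = kendalltau(japan, korea)
rho, rho_p = spearmanr(japan, korea)
print(f"Kendall's tau  = {tau:.3f} (p = {tau_p:.1e})")
print(f"Spearman's rho = {rho:.3f} (p = {rho_p:.1e})")
```

Repeating this computation for each Japan-market pair and ranking the resulting coefficients gives the ordering of co-movement strength reported in the abstract.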
Procedia PDF Downloads 22953 Petrography and Major Elements Chemistry of Granitic Rocks of the Nagar Parkar Igneous Complex, Tharparkar, Sindh
Authors: Amanullah Lagharil, Majid Ali Laghari, M. Qasim, Jan. M., Asif Khan, M. Hassan Agheem
Abstract:
The Nagar Parkar area in southeastern Sindh is a part of the Thar Desert adjacent to the Runn of Kutchh, and covers 480 km². It contains exposures of a variety of igneous rocks referred to as the Nagar Parkar Igneous Complex. The complex comprises rocks belonging to at least six phases of magmatism, from oldest to youngest: 1) amphibolitic basement rocks, 2) riebeckite-aegirine grey granite, 3) biotite-hornblende pink granite, 4) acid dykes, 5) rhyolite “plugs”, and 6) basic dykes (Jan et al., 1997). The last three of these are not significant in volume. Radiometric dates are lacking, but the grey and pink granites are petrographically comparable to the Siwana and Jalore plutons, respectively, emplaced in the Malani volcanic series. Based on these similarities and on proximity, the phase 2 to 6 bodies in Nagar Parkar may belong to the Late Proterozoic (720–745 Ma) Malani magmatism that covers large areas in western Rajasthan. Khan et al. (2007) have reported a 745 ±30 – 755 ±22 Ma U-Th-Pb age on monazite from the pink granite. The grey granite is essentially composed of perthitic feldspar (microperthite, mesoperthite), quartz, a small amount of plagioclase and, characteristically, sodic minerals such as riebeckite and aegirine. A few samples lack aegirine. Fe-Ti oxide and minute, well-developed crystals of zircon occur in almost all the studied samples. Tourmaline, fluorite, apatite and rutile occur in only some samples, and astrophyllite is rare. Allanite, sphene and leucoxene occur as minor accessories, along with local epidote. The pink granite is mostly leucocratic, but locally rich in biotite (up to 7 %). It is essentially made up of microperthite and quartz, with local microcline and minor plagioclase (albite-oligoclase). Some rocks contain sufficient oligoclase to be called adamellite or quartz monzonite. Biotite and hornblende are the main accessory minerals along with iron oxide, but a few samples are without hornblende. Fayalitic olivine, zircon, sphene, apatite, tourmaline, fluorite, allanite and cassiterite occur as sporadic accessory minerals. Epidote, carbonate, sericite and muscovite are produced by the alteration of feldspar. This work concerns the major element geochemistry and comparison of the principal granitic rocks of Nagar Parkar. According to the scheme of De La Roche et al. (1980), the majority of the grey and pink granites classify as alkali granite, 20 % as granite and 10 % as granodiorite. When evaluated on the basis of Shand's indices (after Maniar and Piccoli, 1989), the grey and pink granites span all three fields (peralkaline, metaluminous and peraluminous); the sketch below shows how these indices are computed from major element data. Of the analysed grey granites, 67 % classify as peralkaline, 20 % as peraluminous and 10 % as metaluminous, while 50 % of the pink granites classify as peralkaline, 30 % as metaluminous and 20 % as peraluminous.Keywords: petrography, nagar parkar, granites, geological sciences
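A minimal sketch of how Shand's indices are derived from major element analyses is given below: the molar ratios A/CNK = Al₂O₃/(CaO + Na₂O + K₂O) and A/NK = Al₂O₃/(Na₂O + K₂O) are computed from oxide wt% and used to assign the peralkaline, metaluminous, or peraluminous field. The example analysis is invented, and the simple classification ignores refinements such as the apatite correction to CaO.

```python
# molar masses of the relevant oxides (g/mol)
MOLAR_MASS = {"Al2O3": 101.96, "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20}

def shand_indices(oxides_wt):
    """Return (A/CNK, A/NK) from a dict of oxide wt% values."""
    mol = {ox: oxides_wt[ox] / MOLAR_MASS[ox] for ox in MOLAR_MASS}
    a_cnk = mol["Al2O3"] / (mol["CaO"] + mol["Na2O"] + mol["K2O"])
    a_nk = mol["Al2O3"] / (mol["Na2O"] + mol["K2O"])
    return a_cnk, a_nk

def shand_class(a_cnk, a_nk):
    """Assign the alumina-saturation field used in Shand's classification."""
    if a_nk < 1.0:
        return "peralkaline"
    return "peraluminous" if a_cnk > 1.0 else "metaluminous"

# hypothetical grey-granite analysis (oxide wt%), purely for illustration
sample = {"Al2O3": 11.8, "CaO": 0.4, "Na2O": 4.6, "K2O": 4.5}
a_cnk, a_nk = shand_indices(sample)
print(f"A/CNK = {a_cnk:.2f}, A/NK = {a_nk:.2f} -> {shand_class(a_cnk, a_nk)}")
```

Applying the same calculation to each analysed sample and counting the resulting fields gives the peralkaline/metaluminous/peraluminous percentages reported above.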
Procedia PDF Downloads 456