Search results for: social support on work adjustment with hearing impaired. (Master's dissertation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26641


1291 Neurotoxic Effects Assessment of Metformin in Danio rerio

Authors: Gustavo Axel Elizalde-Velázquez

Abstract:

Metformin is the first-line oral therapy for type II diabetes and is also employed as a treatment for other indications, such as polycystic ovary syndrome, cancer, and COVID-19. Recent data suggest it is the aspirin of the 21st century due to its antioxidant and anti-aging effects. However, a growing number of recent articles indicate that its long-term consumption generates mitochondrial impairment. To date, it is known that metformin increases the biogenesis of Alzheimer's amyloid peptides by up-regulating BACE1 transcription, but further information on brain damage after its consumption is missing. Bearing in mind the above, this work aimed to establish whether or not chronic exposure to metformin alters swimming behavior and induces neurotoxicity in adult Danio rerio. For this purpose, 250 adult Danio rerio were assigned to six tanks of 50 L capacity. Four of the six systems contained 50 fish, while the remaining two had 25 fish (≈1 male:1 female ratio). Each system with 50 fish was allocated one of the three metformin (MET) treatment concentrations (1, 20, and 40 μg/L), with one system serving as the control. The systems with 25 fish, on the other hand, were used as positive controls for acetylcholinesterase (10 μg/L of atrazine) and oxidative stress (3 μg/L of atrazine). After four months of exposure, a mean of 32 fish (SD ± 2) per MET treatment group survived and was used for the evaluation of behavior with the Novel Tank test. After the behavioral assessment, we collected blood and brains from all fish in all treatment groups. For blood collection, fish were anesthetized with an MS-222 solution (150 mg/L), while for brain collection, fish were euthanized using the hypothermic shock method (2–4 °C). Blood was used to determine CASP3 activity and the percentage of apoptotic cells with the TUNEL assay, and brains were used to evaluate acetylcholinesterase activity, oxidative damage, and gene expression. 
After chronic exposure, MET-exposed fish exhibited less swimming activity than control fish. Moreover, compared with the control group, MET significantly inhibited the activity of AChE and induced oxidative damage in the brain of fish. Concerning gene expression, MET significantly upregulated the expression of Nrf1, Nrf2, BAX, p53, BACE1, APP, and PSEN1, and downregulated CASP3 and CASP9. Although MET did not overexpress the CASP3 gene, we observed a significant rise in the activity of this enzyme in the blood of fish exposed to MET compared to the control group, which we then confirmed by a high number of apoptotic cells in the TUNEL assay. To the best of our knowledge, this is the first study to deliver evidence of oxidative impairment, apoptosis, AChE alteration, and overexpression of β-amyloid-related genes in the brain of fish exposed to metformin.
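The behavioral endpoints reported above (reduced swimming activity in the Novel Tank test) are typically computed from tracked fish coordinates. The sketch below shows one minimal way such metrics could be derived; the function name, sampling rate, and tank height are illustrative assumptions, not details taken from the study.

```python
# Sketch: quantifying Novel Tank test endpoints from tracked (x, y) positions.
# The sampling rate and tank height are illustrative assumptions, not values
# from the study.

def novel_tank_metrics(xs, ys, fps=30.0, tank_height=15.0):
    """Return distance travelled (activity), mean speed, and the fraction of
    time spent in the bottom third of the tank (anxiety-like behavior)."""
    dist = 0.0
    for i in range(1, len(xs)):
        dist += ((xs[i] - xs[i - 1]) ** 2 + (ys[i] - ys[i - 1]) ** 2) ** 0.5
    bottom = sum(1 for y in ys if y < tank_height / 3) / len(ys)
    duration = len(xs) / fps
    return {"distance": dist,
            "mean_speed": dist / duration,
            "bottom_fraction": bottom}

# Toy trajectory: a fish hovering near the bottom of the tank.
metrics = novel_tank_metrics([0, 1, 2, 3], [1.0, 1.5, 1.0, 1.5])
```

A lower mean speed and a higher bottom fraction in exposed fish would correspond to the reduced activity the abstract describes.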

Keywords: AChE inhibition, CASP3 activity, Novel Tank test, oxidative damage, TUNEL assay

Procedia PDF Downloads 82
1290 Metalorganic Chemical Vapor Deposition Overgrowth on the Bragg Grating for Gallium Nitride Based Distributed Feedback Laser

Authors: Junze Li, M. Li

Abstract:

Laser diodes fabricated from the III-nitride material system are emerging solutions for next-generation telecommunication systems and for optical clocks based on Ca at 397 nm, Rb at 420.2 nm, and Yb at 398.9 nm combined with 556 nm. Most of the applications, such as communication systems and laser cooling, require single-longitudinal-mode lasers with very narrow linewidth and compact size. Here, the GaN-based distributed feedback (DFB) laser diode is one of the most effective candidates, as its grating is known to yield narrow spectra as well as high power and efficiency. Given the wavelength range, the period of a first-order diffraction grating is under 100 nm, and the realization of such gratings is technically difficult due to the narrow linewidth and the high-quality nitride overgrowth required on the Bragg grating. Some groups have reported GaN DFB lasers with high-order distributed feedback surface gratings, which avoids the overgrowth. However, the coupling strength is generally lower than with a Bragg grating embedded into the waveguide within the GaN laser structure by two-step epitaxy. Therefore, the overgrowth-on-grating technology needs to be studied and optimized. Here we propose to fabricate the fine step-shaped structure of a first-order grating by nanoimprint combined with inductively coupled plasma (ICP) dry etching, and then to overgrow a high-quality AlGaN film by metalorganic chemical vapor deposition (MOCVD). A series of gratings with different periods, depths, and duty ratios are designed and fabricated to study the influence of the grating structure on the nano-heteroepitaxy. Moreover, we observe the nucleation and growth process step by step to study the growth mode of nitride overgrowth on a grating, under the condition that the grating period is larger than the metal migration length on the surface. 
The AFM images demonstrate that a smooth AlGaN surface is achieved, with an average roughness of 0.20 nm over 3 × 3 μm². The full width at half maximum (FWHM) of the (002) reflection in the XRD rocking curve is 278 arcsec for the AlGaN film, and the Al content of the film is 8% according to the XRD mapping measurement, in accordance with the design values. By observing samples with growth times of 200 s, 400 s, and 600 s, the growth mode is summarized in the following steps: initially, nucleation is evenly distributed on the grating structure, as the migration length of Al atoms is low; then, AlGaN grows along the grating top surface; finally, the AlGaN film is formed by lateral growth. This work contributes to realizing GaN DFB lasers by fabricating gratings and performing overgrowth on nano-grating-patterned substrates at wafer scale; the growth dynamics are analyzed as well.
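The claim that a first-order grating for these wavelengths has a period under 100 nm follows directly from the Bragg condition Λ = mλ/(2·n_eff). A quick sketch, where the effective-index value is an assumed figure for a GaN waveguide in this wavelength range, not a number from the paper:

```python
# Sketch: first-order Bragg grating period, Lambda = m * wavelength / (2 * n_eff).
# n_eff = 2.45 is an assumed effective index for a GaN waveguide near 400 nm.

def bragg_period(wavelength_nm, n_eff, order=1):
    return order * wavelength_nm / (2.0 * n_eff)

# Target wavelengths mentioned in the abstract (nm).
for wl in (397.0, 420.2, 398.9):
    period = bragg_period(wl, n_eff=2.45)
    print(f"lambda = {wl} nm -> first-order period = {period:.1f} nm")
```

All three periods come out in the 80-90 nm range, consistent with the "under 100 nm" difficulty the abstract describes.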

Keywords: DFB laser, MOCVD, nanoepitaxy, III-nitride

Procedia PDF Downloads 182
1289 Comparison of Two Methods of Cryopreservation of Testicular Tissue from Prepubertal Lambs

Authors: Rensson Homero Celiz Ygnacio, Marco Aurélio Schiavo Novaes, Lucy Vanessa Sulca Ñaupas, Ana Paula Ribeiro Rodrigues

Abstract:

The cryopreservation of testicular tissue emerges as an alternative for preserving the reproductive potential of individuals who cannot yet produce sperm but will undergo treatments that may affect their fertility (e.g., chemotherapy). The present work therefore aims to compare two cryopreservation methods (slow freezing and vitrification) for testicular tissue of prepubertal lambs. To obtain the testicular tissue, the animals were castrated and the testicles were collected immediately into a physiological solution supplemented with antibiotics. In the laboratory, each testis was cut into small fragments of 3 × 3 × 1 mm³, which were placed in a dish containing Minimum Essential Medium (MEM-HEPES). The fragments were distributed randomly into non-cryopreserved (fresh control), slow freezing (SF), and vitrification groups. For the SF procedure, two fragments from a given male were placed in a 2.0 mL cryogenic vial containing 1.0 mL MEM-HEPES supplemented with 20% fetal bovine serum (FBS) and 20% dimethyl sulfoxide (DMSO). Tubes were placed into a Mr. Frosty™ freezing container with isopropyl alcohol and transferred to a -80 °C freezer for overnight storage. On the next day, each tube was plunged into liquid nitrogen (LN). For vitrification, the ovarian tissue cryosystem (OTC) device was used. Testicular fragments were placed in the OTC device and exposed for four minutes to a first vitrification solution (VS1) composed of MEM-HEPES supplemented with 10 mg/mL bovine serum albumin (BSA), 0.25 M sucrose, 10% ethylene glycol (EG), 10% DMSO, and 150 μM alpha-lipoic acid. VS1 was discarded, and the fragments were then submerged in a second vitrification solution (VS2) of the same composition as VS1 but with 20% EG and 20% DMSO. VS2 was then discarded, and each OTC device containing up to four testicular fragments was closed and immersed in LN. 
After the storage period, the fragments were removed from the LN, kept at room temperature for one minute, and then immersed in a 37 °C water bath for 30 s. Samples were warmed by sequential immersion in solutions of MEM-HEPES supplemented with 3 mg/mL BSA and decreasing concentrations of sucrose. Hematoxylin-eosin staining was used to analyze the tissue architecture. Scoring used a 0 to 3 scale, where 0 represented normal morphology and 3 severe alteration. The histomorphological evaluation of the testicular tissue shows that, for nuclear alteration (distinction of nucleoli and condensation of nuclei), slow freezing did not differ from the control, whereas vitrification presented greater damage (p < 0.05). For epithelial alteration, slow freezing showed scores statistically equal to the control in variables such as retraction of the basement membrane, formation of gaps, and organization of the peritubular cells. The results of the study demonstrate that cryopreservation using the slow freezing method is an excellent tool for the preservation of prepubertal testicular tissue.
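Ordinal 0-3 histology scores like these are commonly compared between groups with a rank-based test such as the Mann-Whitney U. A minimal sketch of the U statistic with made-up scores; the study's actual data and statistical test are not reproduced here:

```python
# Sketch: Mann-Whitney U statistic for comparing ordinal histology scores
# between two groups. Scores below are synthetic illustrations.

def mann_whitney_u(a, b):
    """U statistic for sample a against sample b (ties get average ranks)."""
    pooled = sorted((v, idx) for idx, v in enumerate(a + b))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1                       # extend the block of tied values
        avg_rank = (i + j) / 2.0 + 1.0   # average rank for the tied block
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg_rank
        i = j + 1
    rank_sum_a = sum(ranks[:len(a)])
    return rank_sum_a - len(a) * (len(a) + 1) / 2.0

# Hypothetical 0-3 alteration scores: control vs. vitrified fragments.
control = [0, 0, 1, 0, 1, 0, 0, 1]
vitrified = [2, 3, 2, 3, 2, 2, 3, 2]
u = mann_whitney_u(control, vitrified)  # 0.0 here means complete separation
```

A U statistic near 0 (or near n_a*n_b) indicates the groups barely overlap, which is the pattern behind a "greater damage (p < 0.05)" conclusion.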

Keywords: cryopreservation, slow freezing, vitrification, testicular tissue, lambs

Procedia PDF Downloads 171
1288 Factors Influencing Intention to Engage in Long-term Care Services among Nursing Aide Trainees and the General Public

Authors: Ju-Chun Chien

Abstract:

Rapid aging and depopulation could lead to serious problems, including workforce shortages and rising health expenditure. The current and predicted future long-term care (LTC) workforce shortages could be a real threat to Taiwan's society. By comparing data from 144 nursing aide trainees and 727 members of the general public, the main purpose of the present study was to determine whether there were any notable differences between the two groups in willingness to engage in LTC services. Moreover, this study focused on identifying the attributes of members of the general public who were willing to take LTC jobs but remained on the fence. A self-developed questionnaire was designed based on Ajzen's Theory of Planned Behavior model. After exploratory factor analysis (EFA) and reliability analysis, the questionnaire proved a reliable and valid instrument for both nursing aide trainees and the general public. The main results were as follows. Firstly, nearly 70% of nursing aide trainees showed interest in LTC jobs. Most were middle-aged women (M = 46.85, SD = 9.31), had a high school diploma or lower, had work experience unrelated to healthcare, and were mostly unemployed. The most common reason for attending the LTC training program was to gain skills in a particular field; the second was to obtain the license; the third and fourth were interest in caring for people and increasing income. The three major reasons that might push them to leave LTC jobs were physical exhaustion, poor pay, and being looked down on. Secondly, the variables that best predicted nursing aide trainees' intention to engage in LTC services were personal willingness, perceived behavioral control, having a high school diploma or lower, and support from family and friends. Finally, only 11.80% of the general public reported interest in LTC jobs (the disapproval rating among the general public was 50%). 
In comparison to nursing aide trainees who showed interest in LTC settings, 64.8% of the potential new LTC workforce among the general public were male and held an associate degree, 54.8% had relevant healthcare experience, 67.1% were currently employed, and they were younger (M = 32.19, SD = 13.19) and mostly unmarried (66.3%). Furthermore, the most common reason for this new workforce to engage in LTC jobs was to gain skills in a particular field. The second priority was interest in caring for people. The third and fourth reasons were to give back to society and to increase income, respectively. The top five reasons that might lead the new workforce to quit LTC jobs were as follows: physical exhaustion, being looked down on, excessive working hours, poor pay, and excessive job stress.
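The predictor analysis described (Theory of Planned Behavior variables predicting intention) is typically a regression of intention on attitude, subjective norm, and perceived behavioral control. A self-contained sketch with a tiny hand-rolled logistic regression; the data, predictor names, and resulting coefficients are synthetic illustrations, not the study's estimates:

```python
# Sketch: logistic regression of intention (0/1) on TPB predictors.
# All data below are made up for illustration.
import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Stochastic gradient descent on the logistic log-likelihood."""
    w = [0.0] * (len(X[0]) + 1)  # intercept + one weight per predictor
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = yi - p
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

def predict(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))

# Columns: attitude, subjective norm, perceived behavioral control (z-scored).
X = [[1.2, 0.8, 1.0], [0.9, 1.1, 0.7], [-1.0, -0.5, -0.8],
     [-1.2, -0.9, -1.1], [1.0, 0.2, 0.9], [-0.8, -1.2, -0.6]]
y = [1, 1, 0, 0, 1, 0]  # 1 = intends to take an LTC job
w = fit_logistic(X, y)
```

In the study's terms, a positive fitted weight on perceived behavioral control would correspond to its finding that this variable predicts intention.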

Keywords: long-term care services, nursing aide trainees, Taiwanese people, theory of planned behavior

Procedia PDF Downloads 154
1287 Branding in FMCG Sector in India: A Comparison of Indian and Multinational Companies

Authors: Pragati Sirohi, Vivek Singh Rana

Abstract:

A brand is a name, term, sign, symbol, or design, or a combination of these, intended to identify the goods or services of one seller or group of sellers and to differentiate them from those of competitors; perception influences purchase decisions, so building that perception is critical. The FMCG industry is a low-margin business in which volumes hold the key to success, and the industry therefore places a strong emphasis on marketing. Creating strong brands is important for FMCG companies, and they devote considerable money and effort to developing brands. Brand loyalty is fickle; companies know this, and that is why they work relentlessly toward brand building. The purpose of the study is a comparison between Indian and multinational companies (MNCs) in the FMCG sector in India. It has been hypothesized that after liberalization, Indian companies have taken up the challenge of globalization and some of them are giving stiff competition to MNCs; that MNCs have a stronger brand image than Indian companies; and that advertisement expenditures of MNCs are proportionately higher than those of their Indian counterparts. The operational area of the study is the country as a whole. Continuous time series data are available from 1996 to 2014 for the eight selected companies, chosen on the basis of their large market share, brand equity, and prominence in the market. The research methodology focuses on finding trend growth rates of market capitalization, net worth, and brand values through regression analysis, using secondary data from the Prowess database developed by CMIE (Centre for Monitoring Indian Economy). Brand value is estimated as the excess of a company's market capitalization over its net worth, and brand value indices are calculated. 
Correlation between brand values and advertising expenditure is also measured to assess the effect of advertising on branding. The major results indicate that although MNCs enjoy a stronger brand image, a few Indian companies compete strongly: ITC is the outstanding leader in terms of market capitalization and brand value, and Dabur and Tata Global Beverages Ltd compete equally well on these measures. Advertisement expenditure is highest for HUL, followed by ITC, Colgate, and Dabur, which shows that Indian companies are not behind in the race. Although advertisement expenditure plays a role in the brand-building process, many other factors affect it. Also, the brand values of FMCG companies in India are decreasing over the years, which shows that competition is intense, with aggressive price wars and brand clutter. The implication for Indian companies is that they have to put consistent, proactive, and relentless effort into their brand-building process. Brands need focus and consistency; brand longevity without innovation leads to brand respect but does not create brand value.
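The paper's brand-value proxy (market capitalization minus net worth) and the trend growth rates it estimates by regression can be sketched in a few lines; the series below is invented for illustration and does not come from the Prowess data:

```python
# Sketch: brand value as market cap minus net worth, and a trend growth
# rate from a log-linear regression of value on year index.
# All figures below are made-up illustrations, not CMIE Prowess data.
import math

def brand_value(market_cap, net_worth):
    return market_cap - net_worth

def trend_growth_rate(values):
    """OLS slope of ln(value) on year index; exp(slope) - 1 is the
    compound annual trend growth rate."""
    n = len(values)
    xs = list(range(n))
    ys = [math.log(v) for v in values]
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(slope) - 1.0

# Hypothetical (market cap, net worth) pairs for five consecutive years.
bv = [brand_value(mc, nw) for mc, nw in
      [(100, 40), (115, 44), (130, 47), (150, 52), (172, 57)]]
growth = trend_growth_rate(bv)  # compound trend growth per year
```

The same slope taken over the real 1996-2014 series is what the study reports as a trend growth rate.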

Keywords: brand value, FMCG, market capitalization, net worth

Procedia PDF Downloads 354
1286 Applying Image Schemas and Cognitive Metaphors to Teaching/Learning Italian Preposition a in Foreign/Second Language Context

Authors: Andrea Fiorista

Abstract:

The learning of prepositions is quite a problematic aspect of foreign language instruction, and Italian is certainly no exception. In their prototypical function, prepositions express schematic relations between two entities in a highly abstract, typically image-schematic way. In other words, prepositions encode concepts such as directionality and the location of objects in space and time and, in Cognitive Linguistics' terms, the position of a trajector with respect to a landmark. Learners with different native languages may conceptualize these relations differently, which means they must recategorize (or create new categories) to fit the target language. However, most current Italian Foreign/Second Language handbooks and didactic grammars do not help learners carry out this task, as they tend to provide partial and idiosyncratic descriptions, leaving learners to memorize them, most of the time without success. In their prototypical meaning, prepositions specify precise topographical positions in the physical environment, which become less and less accurate as they radiate out from what might be termed a concrete prototype. Accordingly, the present study aims to elaborate a cognitive and conceptually well-grounded analysis of some extended uses of the Italian preposition a, in order to propose effective pedagogical solutions for the teaching/learning process. Image schemas, cognitive metaphors, and embodiment are efficient cognitive tools for such a task. Indeed, while learning the merely spatial use of the preposition a (e.g. Sono a Roma = I am in Rome; vado a Roma = I am going to Rome) is quite straightforward, it is more complex when a appears in constructions such as verbs of motion + a + infinitive (e.g. Vado a studiare = I am going to study), the inchoative periphrasis (e.g. Tra poco mi metto a leggere = In a moment I will start reading), or the causative construction (e.g. 
Lui mi ha mandato a lavorare = He sent me to work). The study reports data from a Focus-on-Form teaching intervention in which a basic cognitive schema is used to help teachers and students, respectively, explain and understand the extended uses of a. The educational material employed translates Cognitive Linguistics' theoretical assumptions, such as image schemas and cognitive metaphors, into simple images or proto-scenes easily comprehensible for learners; illustrative material is indeed supposed to make metalinguistic content more accessible. Moreover, the concept of embodiment is pedagogically applied through activities involving motion and learners' bodily involvement. It is expected that replacing rote learning with a methodology that gives grammatical elements a proper meaning makes the learning process more effective in both the short and the long term.

Keywords: cognitive approaches to language teaching, image schemas, embodiment, Italian as FL/SL

Procedia PDF Downloads 85
1285 Applying the Quad Model to Estimate the Implicit Self-Esteem of Patients with Depressive Disorders: Comparing the Psychometric Properties with the Implicit Association Test Effect

Authors: Yi-Tung Lin

Abstract:

Researchers commonly assess implicit self-esteem with the Implicit Association Test (IAT). The IAT's measure, often referred to as the IAT effect, indicates the strength of automatic preferences for the self relative to others and is often considered an index of implicit self-esteem. However, according to dual-process theory, the IAT does not rely entirely on automatic processing; it is also influenced by controlled processing. The present study therefore analyzed the IAT data with the Quad model, which separates four processes underlying IAT performance: the likelihood that an automatic association is activated by the stimulus in the trial (AC); that a correct response is discriminated in the trial (D); that the automatic bias is overcome in favor of a deliberate response (OB); and that, when the association is not activated and the individual fails to discriminate a correct answer, a guessing or response bias drives the response (G). The AC and G processes are automatic, while the D and OB processes are controlled. The AC parameter is interpreted as the strength of the association activated by the stimulus, which reflects what implicit measures of social cognition aim to assess: the stronger the automatic association between the self and positive valence, the more likely it is to be activated by a relevant stimulus. Therefore, the AC parameter was used as the index of implicit self-esteem in the present study. Meanwhile, the relationship between implicit self-esteem and depression has not been fully investigated. The cognitive theory of depression assumes that a negative self-schema is crucial in depression; from this point of view, implicit self-esteem should be negatively associated with depression. However, the results of empirical studies are inconsistent. 
The aims of the present study were to examine the psychometric properties of the AC parameter (i.e., test-retest reliability and its correlations with explicit self-esteem and depression) and to compare them with those of the IAT effect. In the present study, 105 patients with depressive disorders completed the Rosenberg Self-Esteem Scale, the Beck Depression Inventory-II, and the IAT at pretest. After at least three weeks, the participants completed the second IAT. The data were analyzed with the latent-trait multinomial processing tree model (latent-trait MPT) using the TreeBUGS package in R. The latent-trait MPT showed a satisfactory model fit. The test-retest reliability of the AC parameter was medium (r = .43, p < .0001) and that of the IAT effect small (r = .29, p < .01). Only the AC parameter showed a significant correlation with explicit self-esteem (r = .19, p < .05). Neither index was correlated with depression. Collectively, the AC parameter was a satisfactory index of implicit self-esteem compared with the IAT effect. The present study also supports the finding that implicit self-esteem is not correlated with depression.
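The four Quad-model processes combine into predicted response probabilities along a processing tree. The sketch below follows the standard Quad-model tree structure (as commonly presented for Conrey et al.'s model); the parameter values are arbitrary illustrations, not estimates from this study:

```python
# Sketch: how the Quad model composes AC, D, OB, and G into predicted
# IAT response probabilities. Parameter values are illustrative only.

def p_correct(ac, d, ob, g, compatible):
    """Probability of a correct IAT response under the Quad model tree.

    compatible=True : the activated association points to the correct key.
    compatible=False: the association must be overcome (OB) to be correct.
    """
    if compatible:
        return (ac                          # association drives the (correct) response
                + (1 - ac) * d              # no association; answer discriminated
                + (1 - ac) * (1 - d) * g)   # otherwise a guess may still be correct
    return (ac * d * ob                     # association detected and overcome
            + (1 - ac) * d                  # no association; answer discriminated
            + (1 - ac) * (1 - d) * g)       # otherwise a guess may still be correct

# With a strong self-positive association, incompatible blocks suffer:
acc_comp = p_correct(ac=0.6, d=0.8, ob=0.5, g=0.5, compatible=True)
acc_incomp = p_correct(ac=0.6, d=0.8, ob=0.5, g=0.5, compatible=False)
```

The gap between the two predicted accuracies is what the raw IAT effect reflects, while the model attributes it to the separate AC, D, OB, and G processes.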

Keywords: cognitive modeling, implicit association test, implicit self-esteem, quad model

Procedia PDF Downloads 125
1284 The MHz Frequency Range EM Induction Device Development and Experimental Study for Low Conductive Objects Detection

Authors: D. Kakulia, L. Shoshiashvili, G. Sapharishvili

Abstract:

The results of this study relate to plastic mine detection research using electromagnetic induction, the development of appropriate equipment, and the evaluation of expected results. Electromagnetic induction sensing is used effectively in the detection of metal objects in the soil and in the discrimination of unexploded ordnance. Metal objects interact well with a low-frequency alternating magnetic field, and their electromagnetic response can be detected in the low-frequency range even when they are buried in the ground. Detecting plastic objects such as plastic mines by electromagnetic induction, however, is difficult: the interaction of non-conducting or low-conductivity objects with a low-frequency alternating magnetic field is very weak. In the high-frequency range, where wave processes already take place, the interaction increases, but interactions with other, more distant objects also increase; a complex interference picture forms, and extracting useful information becomes difficult. Sensing by electromagnetic induction in the intermediate MHz frequency range is therefore the subject of this research. The concept of detecting plastic mines in this range can be based on studying the electromagnetic response of a non-conductive cavity in a low-conductivity environment, or on detecting the small metal components of plastic mines, taking their construction into account. A detector node based on the Analog Devices AD8302 amplitude and phase detector was developed for the experimental studies. The node has two inputs: one receives a sinusoidal signal from the generator, to which a transmitting coil is also connected, and the receiver coil is attached to the second input. An additional circuit provides the option of amplifying the signal from the receiver coil by 20 dB. The node has two outputs. 
The voltages obtained at the outputs reflect the ratio of the amplitudes and the phase difference of the input harmonic signals. Experimental measurements were performed with the transmitter and receiver coils in different positions over the frequency range 1-20 MHz. A Tektronix AFG3052C arbitrary/function generator and an eight-channel high-resolution PicoScope 4824 oscilloscope were used in the experiments. Experimental measurements were also performed with a low-conductivity test object. The results of the measurements and their comparative analysis show the capabilities of the simple detector node and the prospects for its further development in this direction. The experimental results are compared with and analyzed against the results of corresponding computer modeling based on the method of auxiliary sources (MAS). The measurements are driven from the MATLAB environment. Acknowledgment: This work was supported by the Shota Rustaveli National Science Foundation (SRNSF) (Grant number: NFR 17_523).
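The AD8302's two output voltages encode the amplitude ratio and phase difference linearly. Taking the nominal datasheet slopes (30 mV/dB for magnitude and 10 mV/degree for phase, both centered at 0.9 V), the decoding step can be sketched as below; these are nominal values, and a real setup would need calibration against reference signals:

```python
# Sketch: converting AD8302 output voltages back to an amplitude ratio (dB)
# and a phase difference (degrees), using the device's nominal transfer
# characteristics. Treat the constants as datasheet nominals, not a
# calibrated model of the detector node described above.

def ad8302_decode(v_mag, v_phs):
    gain_db = (v_mag - 0.9) / 0.030            # 0.9 V corresponds to 0 dB
    phase_deg = 90.0 - (v_phs - 0.9) / 0.010   # 0.9 V corresponds to 90 degrees
    return gain_db, phase_deg

# Equal amplitudes and a 90-degree phase shift both read 0.9 V:
gain, phase = ad8302_decode(0.9, 0.9)
```

Since the phase output folds the sign of the phase difference, resolving it (and removing coil-dependent offsets) is exactly the kind of processing the MATLAB-driven acquisition would handle.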

Keywords: EM induction sensing, detector, plastic mines, remote sensing

Procedia PDF Downloads 144
1283 The Impact of Tourism on the Intangible Cultural Heritage of Pilgrim Routes: The Case of El Camino de Santiago

Authors: Miguel Angel Calvo Salve

Abstract:

This qualitative and quantitative study will identify the impact of tourism pressure on the intangible cultural heritage of the pilgrim route of El Camino de Santiago (the Way of Saint James) and propose an approach to a sustainable tourism model for such Cultural Routes. Since 1993, the Spanish section of the pilgrim route of El Camino de Santiago has been on the World Heritage List. In 1994, the International Committee on Cultural Routes (CIIC-ICOMOS) began its work with the goal of studying, preserving, and promoting cultural routes and their significance as a whole. Another ICOMOS document, the 2008 Charter on Cultural Routes, pointed out the importance of both tangible and intangible heritage and the need for a holistic vision in preserving these important cultural assets: tangible elements provide physical confirmation of the existence of a cultural route, while the intangible elements give sense and meaning to it as a whole. The intangible assets of a Cultural Route are key to understanding the route's significance and its associated heritage values. Like many pilgrim routes, the Route to Santiago, the result of a long evolutionary process, exhibits and is supported by intangible assets, including hospitality, cultural and religious expressions, music, literature, and artisanal trade, among others. A large increase in pilgrims walking the route with very different aims, together with tourism pressure, has shown how fragile and vulnerable the dynamic links between the intangible cultural heritage and the local inhabitants along El Camino are. The economic benefits of cultural routes are commonly fundamental to the micro-economies of the people living along them, replacing traditional productive activities, which in turn modifies and has an impact on the surrounding environment and the route itself. 
Consumption of heritage is one of the major issues for the sustainable preservation promoted with the intention of revitalizing these sites and places, and the adaptation of local communities to new conditions aimed at preserving and protecting the existing heritage has had a significant impact on the immaterial inheritance. Based on questionnaires administered to pilgrims, tourists, and local communities along El Camino during the peak season of the year, and using official statistics from the Galician Pilgrim's Office, this study will identify the risks and threats to El Camino de Santiago as a Cultural Route. The threats visible today due to the impact of mass tourism include transformations of tangible heritage, consumerism around the intangible, changes in local activities, loss of authenticity of symbols and spiritual significance, and the transformation of pilgrimage into a tourism 'product', among others. The study will also propose measures and solutions to mitigate those impacts and better preserve this type of cultural heritage. It will thereby help route service providers and policymakers to better preserve the Cultural Route as a whole and, ultimately, to improve the pilgrims' experience.

Keywords: cultural routes, El Camino de Santiago, impact of tourism, intangible heritage

Procedia PDF Downloads 81
1282 Acrylate-Based Photopolymer Resin Combined with Acrylated Epoxidized Soybean Oil for 3D-Printing

Authors: Raphael Palucci Rosa, Giuseppe Rosace

Abstract:

Stereolithography (SLA) is a 3D-printing technology that has been steadily growing in popularity for both industrial and personal applications due to its versatility, high accuracy, and low cost. Its printing process uses a light emitter to solidify a photosensitive liquid resin layer by layer to produce solid objects. However, the majority of the resins used in SLA are derived from petroleum and are characterized by toxicity, stability, and recalcitrance to degradation in natural environments. Aiming to develop an eco-friendly resin, this work investigated different combinations of a standard commercial SLA resin (Peopoly UV Professional) with a vegetable-based resin. To reach this goal, different mass concentrations (varying from 10 to 50 wt%) of acrylated epoxidized soybean oil (AESO), a vegetable resin produced from soybean oil, were mixed with the commercial acrylate-based resin. Diphenyl(2,4,6-trimethylbenzoyl)phosphine oxide (TPO) at 1.0 wt% was used as the photoinitiator, and the samples were printed on a Peopoly Moai 130 set to its standard configuration for printing commercial resins. After printing, the excess resin was drained off, and the samples were washed in isopropanol and water to remove any unreacted resin and then post-cured for 30 min in a UV chamber. FT-IR analysis was used to confirm the UV polymerization of the formulated resin at different AESO/Peopoly ratios. The signals from 1643.7 to 1616 cm⁻¹, which correspond to the C=C stretching of the AESO acrylic acids and the Peopoly acrylic groups, decrease significantly after the reaction, indicating consumption of the double bonds during radical polymerization. Furthermore, the slight shift of the C-O-C signal from 1186.1 to 1159.9 cm⁻¹ and the decrease of the signals at 809.5 and 983.1 cm⁻¹, which correspond to unsaturated double bonds, are both evidence of successful polymerization. 
Mechanical analyses showed a decrease of 50.44% in tensile strength when adding 10 wt% of AESO, but the value was still in the same range as other commercial resins. Elongation at break increased by 24% with 10 wt% of AESO, and swelling analysis showed that samples with a higher AESO concentration absorbed less water than their counterparts. Furthermore, high-resolution prototypes were printed using both resins, and visual analysis did not show any significant difference between the two products. In conclusion, the AESO resin was successfully incorporated into a commercial resin without affecting its printability. The bio-based resin showed lower tensile strength than the Peopoly resin due to network loosening, but it was still in the range of other commercial resins. The hybrid resin also showed better flexibility and water resistance than the Peopoly resin without affecting its resolution. Finally, the development of new types of SLA resin is essential to provide sustainable alternatives to the commercial petroleum-based ones.
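The blend preparation described above is simple mass bookkeeping. A sketch, where the batch size and the convention of taking the 1.0 wt% TPO out of the total before splitting the resin masses are assumptions for illustration (the paper does not state its mixing convention):

```python
# Sketch: batch masses for an AESO/commercial-resin blend with 1.0 wt% TPO
# photoinitiator. Batch size and the TPO accounting convention are assumed.

def blend_masses(total_g, aeso_wt_frac, tpo_wt_frac=0.01):
    """Split a batch into AESO, commercial resin, and TPO masses (grams)."""
    tpo = total_g * tpo_wt_frac
    resin_total = total_g - tpo          # resin mass after reserving TPO
    aeso = resin_total * aeso_wt_frac    # e.g. 0.10 for a 10 wt% AESO blend
    commercial = resin_total - aeso
    return {"AESO_g": aeso, "Peopoly_g": commercial, "TPO_g": tpo}

batch = blend_masses(100.0, aeso_wt_frac=0.10)  # 10 wt% AESO blend
```

Sweeping `aeso_wt_frac` from 0.10 to 0.50 reproduces the composition series the study printed and tested.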

Keywords: 3D-printing, bio-based, resin, soybean, stereolithography

Procedia PDF Downloads 123
1281 The Importance of the Phases of Information, Diagnosis, Planning, Intervention and Management in a Historic Center

Authors: Giovanni Duran Polo

Abstract:

Demonstrating the importance of stages such as information, diagnosis, management, and intervention is fundamental to achieving a historic center that is alive, of high quality, and inhabited. One of the major actions to take is to promote the concept of managing a historic center with harmonious development; for that, the actors concerned should strengthen the idea that the historic center can be a neighborhood of all and for all. The centers of historic cities present, like any other urban area, social and environmental issues; yet they carry an added value that no other city neighborhood has. The heritage component (the urban plan, the environmental quality offered by properties of architectural or landscape value, or certain land uses) is the differentiating element, as well as the tool that makes them attractive in the face of the pressure exerted by new housing developments or shopping centers. That is why, drawing on the experience of working in historic centers, the actions carried out in heritage areas are set out here. This paper will show how, in each of these places, the work sought to carry out the phases of information, gathering all the data needed to approach the territory with specific data; and diagnosis, which allowed the actors to see what state the center was in, how it related to the rest of the city, what problems affected its situation, and what potential it had to compete in a global market. It also discusses the importance of regulation, as the legal and normative basis that gives order and coherence, establishing what can and cannot be done in an area about which citizens hold many myths and stories when they want to intervene in protected buildings. It is likewise appropriate to show how the intervention phase could be developed, in which the actions on tangible elements and the interventions for the protection of heritage property are executed.
Management is the final phase, which carries out all that was set down on paper; it is the time to orient, explain, persuade, promote, and encourage citizens to take care of the heritage, showing that it is profitable as well as an obligation, and not an insurmountable burden. This is the moment to play every card to make the historic center and its heritage more alive today, more inhabited, and to transform it into a quality place, so that citizens will cherish it and understand its importance. Inhabited historic centers, with the required endowments and facilities, quality commerce, a constant cultural offer, well-preserved buildings, and tidy, modern, and safe public spaces are always attractive for tourism; but first of all, the place should be conceived for citizens, otherwise everything will be doomed to failure.

Keywords: development, diagnosis, heritage historic center, intervention, management, patrimony

Procedia PDF Downloads 394
1280 Mechanical Properties of Diamond Reinforced Ni Nanocomposite Coatings Made by Co-Electrodeposition with Glycine as Additive

Authors: Yanheng Zhang, Lu Feng, Yilan Kang, Donghui Fu, Qian Zhang, Qiu Li, Wei Qiu

Abstract:

Diamond-reinforced Ni matrix composite has been widely applied in engineering for coating large-area structural parts owing to its high hardness and good wear and corrosion resistance compared with pure nickel. The mechanical properties of a Ni-diamond composite coating can be promoted by high incorporation and uniform distribution of diamond particles in the nickel matrix, while the distribution of the particles is affected by the electrodeposition process parameters, especially the additives in the plating bath. Glycine has been utilized as an organic additive during the preparation of pure nickel coatings, where it can effectively increase the coating hardness. Nevertheless, to the authors' best knowledge, no research on the effects of glycine on Ni-diamond co-deposition has been reported. In this work, diamond-reinforced Ni nanocomposite coatings were fabricated by a co-electrodeposition technique from a modified Watts-type bath in the presence of glycine. After preparation, the SEM morphology of the composite coatings was observed, combined with energy-dispersive X-ray spectroscopy, and the diamond incorporation was analyzed. The surface morphology and roughness were obtained with a three-dimensional profile instrument. 3D Debye rings formed by XRD were analyzed to characterize the nickel grain size and orientation in the coatings. The average coating thickness was measured with a digital micrometer to deduce the deposition rate. The microhardness was tested with an automatic microhardness tester. The friction coefficient and wear volume were measured with a reciprocating wear tester to characterize the coating's wear resistance and cutting performance. The experimental results confirmed that the presence of glycine effectively improved the surface morphology and roughness of the composite coatings.
By optimizing the glycine concentration, the incorporation of diamond particles was increased, while the nickel grain size decreased with increasing glycine. The hardness of the composite coatings was increased as the glycine concentration increased. The friction and wear properties were evaluated as the glycine concentration was optimized, showing a decrease in the wear volume. The wear resistance of the composite coatings increased as the glycine content was increased to an optimum value, beyond which the wear resistance decreased. Glycine complexation contributed to the nickel grain refinement and improved the diamond dispersion in the coatings, both of which made a positive contribution to the amount and uniformity of embedded diamond particles, thus enhancing the microhardness, reducing the friction coefficient, and hence increasing the wear resistance of the composite coatings. Therefore, additive glycine can be used during the co-deposition process to improve the mechanical properties of protective coatings.

Keywords: co-electrodeposition, glycine, mechanical properties, Ni-diamond nanocomposite coatings

Procedia PDF Downloads 121
1279 Pakistan Nuclear Security: Threats from Non-State Actors

Authors: Jennifer Wright

Abstract:

The recent rise of powerful terrorist groups such as ISIS and Al-Qaeda raises concerns about nuclear terrorism and puts a focus on nuclear security, specifically the physical security of nuclear weapons and fissile-material storage sites in countries where powerful non-state actors are present, particularly because these non-state actors, who lack their own sovereign territory, cannot be 'deterred' in the traditional sense. In light of the current threat environment, it is necessary to rethink deterrence strategies for the 21st century: a multipolar world with powerful non-state actors present. As a country in the spotlight for its low ranking on the Nuclear Threat Initiative's (NTI) Nuclear Security Index, Pakistan is a relevant example for exploring whether the presence of non-state actors poses a real risk to nuclear security today. It is necessary to examine its nuclear security policies to determine whether they are robust enough to deal with political instability and violence in the country. Drawing on interviews with experts on nuclear security and nuclear terrorism carried out in Islamabad in May 2017, this paper aims to highlight the findings by providing a Pakistan-centric view on the subject and giving experts there a chance to counter criticism. Western media would have us fearful of nuclear security mechanisms in Pakistan after reports that areas such as cybersecurity and the accounting and control of materials are weak, and that sensitive nuclear material has been transported in unmarked, unguarded vehicles. Also reported are cases where terrorist groups carried out targeted attacks against Pakistani military bases or secure sites where nuclear material is stored. One specific question asked of each interviewee in Islamabad was: 'Do you feel the threat of nuclear terrorism calls into question the reliance on deterrence?'
Their responses will be elaborated on in the longer paper, but overall they reflect the view that deterrence still serves a purpose for state-to-state security strategy, but not for a state countering non-state threats. If nuclear security is lax enough for these non-state actors to get their hands on either an intact nuclear weapon or enough weapons-grade fissile material to build one, what would stop them from launching a nuclear attack? As deterrence is a state-centric strategy, it does not work to deter non-state actors from attacking another state: lacking their own territory, they do not fear a reprisal attack. Deterrence needs to be re-examined and its relevance analyzed to determine its utility in the current security environment. The aim of this research is to demonstrate the real risk of nuclear terrorism by pointing to weaknesses in global nuclear security, particularly in Pakistan, and to provoke thought on the weaknesses of deterrence as a whole. Original thinking is needed as we attempt to respond adequately to the 21st century's current threat environment.

Keywords: deterrence, non-proliferation, nuclear security, nuclear terrorism

Procedia PDF Downloads 225
1278 Determinants of Profit Efficiency among Poultry Egg Farmers in Ondo State, Nigeria: A Stochastic Profit Function Approach

Authors: Olufunke Olufunmilayo Ilemobayo, Barakat O. Abdulazeez

Abstract:

Profit making among poultry egg farmers has been a challenge to the efficient distribution of scarce farm resources over the years, due mainly to a low capital base, inefficient management, and technical and economic inefficiency; poultry egg production has thus become an underperforming enterprise characterised by low profit margins. Previous studies focus mainly on broiler production and the efficiency of its production; however, there is a paucity of information on profit efficiency in the study area. Hence, the determinants of profit efficiency among poultry egg farmers in Ondo State, Nigeria were investigated. A purposive sampling technique was used to obtain primary data from poultry egg farmers in the Owo and Akure local government areas of Ondo State through a well-structured questionnaire. Socio-economic characteristics such as age, gender, educational level, marital status, household size, access to credit, and extension contact, together with input and output data such as flock size, cost of feeders and drinkers, cost of feed, cost of labour, cost of drugs and medications, cost of energy, price of a crate of table eggs, and price of spent layers, were the variables used in the study. Data were analysed using descriptive statistics, budgeting analysis, and a stochastic profit function/inefficiency model. The descriptive statistics show that 52 per cent of the poultry farmers were between 31 and 40 years old, 62 per cent were male, 90 per cent had tertiary education, 66 per cent were primarily poultry farmers, 78 per cent were the original poultry farm owners, and 55 per cent had more than 5 years' work experience. Descriptive statistics on costs and returns indicated that 64 per cent of returns came from sales of eggs, while the remaining 36 per cent came from sales of spent layers. The cost of feeding took the highest proportion of the cost of production (69 per cent) and the cost of medication the lowest (7 per cent).
A positive gross margin of ₦5,518,869.76, a net farm income of ₦5,500,446.82, and a net return on investment of 0.28 indicated that poultry egg production is profitable. Equipment cost (22.757), feeding cost (18.3437), labour cost (136.698), flock size (16.209), and drug and medication cost (4.509) were factors affecting profit, while education (-2.3143), household size (-18.4291), access to credit (-16.027), and experience (-7.277) were determinants of profit efficiency. Education, household size, access to credit, and experience in poultry production were thus the main determinants of profit efficiency of poultry egg production in Ondo State. The other factors (cost of feeding, cost of labour, flock size, and cost of drugs and medication) positively and significantly influenced profit in Ondo State, Nigeria.
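The budgeting measures reported above can be sketched as simple formulas. The sketch below is illustrative only: the input figures passed to the functions are hypothetical placeholders, since the abstract reports only the resulting gross margin, net farm income, and return-on-investment ratio, not the full revenue and cost breakdown.

```python
def gross_margin(total_revenue, total_variable_cost):
    # revenue left after variable costs (feed, drugs, energy, labour, ...)
    return total_revenue - total_variable_cost

def net_farm_income(gross_margin_value, total_fixed_cost):
    # gross margin less fixed costs (equipment depreciation, housing, ...)
    return gross_margin_value - total_fixed_cost

def net_return_on_investment(net_income, total_cost):
    # naira earned per naira invested; 0.28 means a 28% return
    return net_income / total_cost
```

If the ratio is defined as net farm income over total cost, the reported 0.28 together with a net farm income of ₦5,500,446.82 would imply total costs of roughly ₦19.6 million, though the abstract does not state this directly.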

Keywords: cost and returns, economic inefficiency, profit margin, technical inefficiency

Procedia PDF Downloads 127
1277 Nitrate Photoremoval in Water Using Nanocatalysts Based on Ag / Pt over TiO2

Authors: Ana M. Antolín, Sandra Contreras, Francesc Medina, Didier Tichit

Abstract:

Introduction: High levels of nitrates (> 50 ppm NO3-) in drinking water are potentially risky to human health, and in recent years the nitrate concentration in groundwater has been rising in the EU and other countries. Conventional catalytic processes for reducing nitrate into N2 and H2O lead to toxic intermediates and by-products, such as NO2-, NH4+, and NOx gases. Alternatively, photocatalytic nitrate removal using solar irradiation and heterogeneous catalysts is a very promising and eco-friendly technique; it has been scarcely studied, and more research on highly efficient catalysts is still needed. In this work, different nanocatalysts supported on Aeroxide Titania P25 (P25) were prepared, varying: the Ag content (0.5-4 wt.%); the Pt content (2 and 4 wt.%); the Pt precursor (H2PtCl6/K2PtCl6); and the impregnation order of the two metals. Pt was chosen in order to increase the selectivity to N2 and decrease that to NO2-. The catalysts were characterized by nitrogen physisorption, X-ray diffraction, UV-visible spectroscopy, TEM, and X-ray photoelectron spectroscopy. The aim was to determine the influence of the composition and the preparation method of the catalysts on the conversion and selectivity of the nitrate reduction, as well as to reach an overall, better understanding of the process. Nanocatalyst synthesis: For the preparation of the mono- and bimetallic catalysts, drop-wise wetness impregnation of the precursors (AgNO3, H2PtCl6, K2PtCl6) followed by a reduction step (NaBH4) was used to obtain the metal colloids. Results and conclusions: Denitration experiments were performed in a 350 mL PTFE batch reactor under inert standard operational conditions, ultraviolet irradiation (λ = 254 nm (UV-C); λ = 365 nm (UV-A)), and in the presence or absence of hydrogen gas as a reducing agent, contrary to most studies, which use oxalic or formic acid. Samples were analyzed by ion chromatography.
Blank experiments using, respectively, P25 in dark conditions, hydrogen only, and UV irradiation without hydrogen demonstrated a clear influence of the presence of hydrogen on nitrate reduction; they also demonstrated that UV irradiation increased the selectivity to N2. Interestingly, the best activity was obtained under ultraviolet lamps with H2, especially at the wavelength closer to visible light (λ = 365 nm). Among the monometallic catalysts, 2% Ag/P25 led to the highest NO3- conversion; however, the nitrite quantities still have to be diminished. On the other hand, practically no nitrate conversion was observed with the monometallic Pt/P25 catalysts. Therefore, a loading of 2% Ag was chosen for the bimetallic catalysts. Regarding the bimetallic catalysts, the metal impregnation order, the amount of metal, and the Pt precursor strongly affect the results. Higher selectivity to the desirable N2 gas is obtained when Pt is added first, especially with K2PtCl6 as the Pt precursor. This suggests that when Pt is added second, it covers the Ag particles, which are the most active in this reaction. It can be concluded that Ag enables the nitrate reduction step to nitrite, and Pt the nitrite reduction step toward the desirable N2 gas.

Keywords: heterogeneous catalysis, hydrogenation, nanocatalyst, nitrate removal, photocatalysis

Procedia PDF Downloads 269
1276 Examining the Links between Fish Behaviour and Physiology for Resilience in the Anthropocene

Authors: Lauren A. Bailey, Amber R. Childs, Nicola C. James, Murray I. Duncan, Alexander Winkler, Warren M. Potts

Abstract:

Changes in behaviour and physiology are the most important responses of marine life to anthropogenic impacts such as climate change and over-fishing. Behavioural changes (such as a shift in distribution or changes in phenology) can ensure that a species remains in an environment suited to its optimal physiological performance. However, if marine species are unable to shift their distributions, they must rely on physiological adaptation (either by broadening their metabolic curves to tolerate a range of stressors or by shifting their metabolic curves to maximize their performance at extreme stressors). However, since fish physiology and behaviour are linked, changes to either of these traits may have reciprocal interactions. This paper reviews the current knowledge of the links between the behaviour and physiology of fishes, discusses these in the context of exploitation and climate change, and makes recommendations for future research needs. The review revealed that our understanding of the links between fish behaviour and physiology is rudimentary. However, both are hypothesized to be linked to stress responses along the hypothalamic-pituitary axis. The link between physiological capacity and behaviour is particularly important, as both determine the response of an individual to a changing climate and are under selection by fisheries. While it appears that all types of capture fisheries are likely to reduce the adaptive potential of fished populations to climate stressors, angling, which is primarily associated with recreational fishing, may induce fission of natural populations by removing individuals with bold behavioural traits and potentially the physiological traits required to facilitate behavioural change. Future research should focus on assessing how the links between physiological capacity and behaviour influence catchability, the response to climate change drivers, and post-release recovery.
The plasticity of phenotypic traits should be examined under a range of stressors of differing intensity in several species and life history stages. Future studies should also assess plasticity (fission or fusion) in the phenotypic structuring of social hierarchy and how this influences habitat selection. Ultimately, to fully understand how physiology is influenced by the selective processes driven by fisheries, long-term monitoring of the physiological and behavioural structure of fished populations, their fitness, and catch rates are required.

Keywords: climate change, metabolic shifts, over-fishing, phenotypic plasticity, stress response

Procedia PDF Downloads 114
1275 Friction and Wear Characteristics of Diamond Nanoparticles Mixed with Copper Oxide in Poly Alpha Olefin

Authors: Ankush Raina, Ankush Anand

Abstract:

Plyometric training is a form of specialised strength training that uses fast muscular contractions to improve power and speed, and it is used by coaches and athletes in sports conditioning. Despite its useful role in sports conditioning programmes, the effect of plyometric training on athletes' cardiovascular health, especially the electrocardiogram (ECG), has not been established in the literature. The purpose of the study was to determine the effects of lower- and upper-body plyometric training on the ECG of athletes. The study was guided by three null hypotheses. A quasi-experimental research design was adopted. Seventy-two university male athletes constituted the population of the study. Thirty male athletes aged 18 to 24 years volunteered to participate, but only twenty-three completed the study. The volunteers were apparently healthy, physically active, and free of any lower- or upper-extremity bone injuries for the past one year, and they had no medical or orthopaedic injuries that might affect their participation. Ten subjects were purposively assigned to each of the three groups: lower-body plyometric training (LBPT), upper-body plyometric training (UBPT), and control (C). Training consisted of six plyometric exercises at moderate intensity: lower-body (ankle hops, squat jumps, tuck jumps) and upper-body (push-ups, medicine-ball chest throws and side throws). The data were collated and analysed using the Statistical Package for the Social Sciences (SPSS version 22.0). The research questions were answered using means and standard deviations, while paired-samples t-tests were used to test the hypotheses. The results revealed that athletes trained using LBPT showed greater reductions in ECG parameters than those in the control group.
The results also revealed that athletes trained using both LBPT and UBPT showed no significant differences from the control group in the ECG parameters following the ten weeks of plyometric training, except in the QRS (Q wave, R wave and S wave) complex. Based on the findings of the study, it was recommended, among other things, that coaches include both LBPT and UBPT as part of athletes' overall training programmes from primary to tertiary institutions to optimise performance as well as to reduce the risk of cardiovascular disease and promote a healthy lifestyle.

Keywords: boundary lubrication, copper oxide, friction, nano diamond

Procedia PDF Downloads 119
1274 Development of Loop-Mediated Isothermal Amplification (LAMP) Assay for the Diagnosis of Ovine Theileriosis

Authors: Muhammad Fiaz Qamar, Uzma Mehreen, Muhammad Arfan Zaman, Kazim Ali

Abstract:

Ovine theileriosis is a worldwide concern, especially in tropical and subtropical areas with abundant ticks, yet it has received little attention in developed and developing regions owing to the low economic value of sheep and the low-to-middle level of infection in small-ruminant herds. Across Asia, prevalence surveys have been conducted to provide comparable estimates of flock- and animal-level prevalence of theileriosis. Timely diagnosis and control of theileriosis is a challenge for veterinarians and farmers because of the nature of the organism and the inadequacy of the restricted control plans. Most of this work is based upon the development of a technique that is farmer-friendly, inexpensive, and easy to perform in the field; timely diagnosis of this disease will decrease the irrational use of drugs. A further aim was to determine the prevalence of theileriosis in District Jhang using the conventional method, PCR, qPCR, and LAMP. We quantified the molecular epidemiology of T. lestoquardi in sheep from Jhang district, Punjab, Pakistan. In this study, we found that the overall prevalence of theileriosis in sheep was 9.1% (32/350) using the Giemsa staining technique, 13.7% (48/350) using PCR, 16% (56/350) using qPCR, and 17.1% (60/350) using LAMP. The specificity and sensitivity were also calculated by comparing PCR and LAMP; more positive results were obtained when the diagnosis was made with LAMP. There was little difference between the positive results of PCR and qPCR, and the fewest positive animals were detected using the conventional Giemsa staining technique.
Regarding the specificity and sensitivity of LAMP compared to PCR, the cross-tabulation shows that the sensitivity of LAMP was 94.4% and its specificity 78%. Advances in science must rest on reality-based ideas that can close the gaps and remove the hurdles in the way of research; LAMP is one such technique, and it has done wonders in adding value and helping people at large. It is a powerful biological diagnostic tool and has helped greatly in the proper diagnosis and treatment of certain diseases. Other diagnostic methods, such as culture and serological techniques, have exposed humans to great danger; with the help of a molecular diagnostic technique like LAMP, exposure to such pathogens is avoided in the current era, and a prompt, presumptive diagnosis can be made. Compared with LAMP, PCR has many disadvantages: it is relatively expensive, time-consuming, and complicated, while LAMP is relatively cheap, easy to perform, less time-consuming, and more accurate. The LAMP technique has removed hurdles in the way of scientific research and molecular diagnostics, making them approachable for poor and developing countries.
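The prevalence percentages quoted above follow directly from the reported counts, and sensitivity and specificity come from a 2×2 cross-tabulation against the reference test. A minimal sketch of the standard formulas follows; the 2×2 cell counts are not reported in the abstract, so the functions take them as parameters rather than assuming specific values.

```python
def prevalence_pct(positives, total=350):
    # percentage of positive animals among those sampled (350 sheep here)
    return round(positives / total * 100, 1)

def sensitivity(true_pos, false_neg):
    # proportion of reference-test positives that the new test also flags
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    # proportion of reference-test negatives that the new test also clears
    return true_neg / (true_neg + false_pos)

# Prevalence by diagnostic method, from the counts in the abstract
print(prevalence_pct(32))  # Giemsa staining
print(prevalence_pct(48))  # PCR
print(prevalence_pct(56))  # qPCR
print(prevalence_pct(60))  # LAMP
```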

Keywords: distribution, Theileria, LAMP, primer sequences, PCR

Procedia PDF Downloads 101
1273 Garnet-based Bilayer Hybrid Solid Electrolyte for High-Voltage Cathode Material Modified with Composite Interface Enabler on Lithium-Metal Batteries

Authors: Kumlachew Zelalem Walle, Chun-Chen Yang

Abstract:

Solid-state lithium metal batteries (SSLMBs) are considered promising candidates for next-generation energy storage devices due to their superior energy density and excellent safety. However, recent findings have shown that lithium (Li) dendrites in SSLMBs still exhibit a formidable ability to grow, so the development of SSLMBs must face the challenges posed by the Li-dendrite problem. In this work, an inorganic/organic composite coating material (g-C3N4/ZIF-8/PVDF) was used to modify the surface of the lithium metal anode (LMA). The modified LMA (denoted g-C3N4@Li) was then assembled with lithium Nafion (LiNf)-coated commercial NCM811 (LiNf@NCM811) using a bilayer hybrid solid electrolyte (Bi-HSE) that incorporated a layer with 20 wt.% (vs. polymer) LiNf-coated Li6.05Ga0.25La3Zr2O11.8F0.2 (LiNf@LG0.25LZOF) filler facing the positive electrode and another layer with 80 wt.% (vs. polymer) filler facing the g-C3N4@Li. The garnet-type Li6.05Ga0.25La3Zr2O11.8F0.2 (LG0.25LZOF) solid electrolyte was prepared via a co-precipitation process in a Taylor flow reactor and modified with LiNf, a Li-ion-conducting polymer. The Bi-HSE exhibited a high ionic conductivity of 6.8 × 10⁻⁴ S cm⁻¹ at room temperature and a wide electrochemical window (0-5.0 V vs. Li/Li+). The coin cell was cycled between 2.8 and 4.5 V at 0.2C and delivered an initial specific discharge capacity of 194.3 mAh g⁻¹; after 100 cycles, it maintained 81.8% of its initial capacity at room temperature. The presence of the nano-sheet g-C3N4/ZIF-8/PVDF composite coating on the LMA surface suppresses dendrite growth and enhances the compatibility as well as the interfacial contact between the anode and the electrolyte membrane. g-C3N4@Li symmetrical cells incorporating this hybrid electrolyte possessed excellent interfacial stability over 1000 h at 0.1 mA cm⁻² and a high critical current density (1 mA cm⁻²).
Moreover, the in-situ formation of Li3N in the solid electrolyte interphase (SEI) layer, as indicated by the XPS results, also improves the ionic conductivity and interfacial contact during the charge/discharge process. Therefore, these multi-layered fabrication strategies for hybrid/composite solid electrolyte membranes, together with the modification of the LMA surface using mixed coating materials, have potential application in the preparation of highly safe high-voltage cathodes for SSLMBs.

Keywords: high-voltage cathodes, hybrid solid electrolytes, garnet, graphitic-carbon nitride (g-C3N4), ZIF-8 MOF

Procedia PDF Downloads 64
1272 The Human Rights Code: Fundamental Rights as the Basis of Human-Robot Coexistence

Authors: Gergely G. Karacsony

Abstract:

Fundamental rights are the result of a thousand years' progress in legislation, adjudication, and legal practice. They serve as the framework for the peaceful cohabitation of people, protecting the individual from any abuse by the government or violation by other people. Artificial intelligence, however, is a development of the very recent past and one of the most important prospects for the future. Artificial intelligence is now capable of communicating and performing actions the same way as humans; such acts are sometimes impossible to tell from actions performed by flesh-and-blood people. In a world where human-robot interactions are more and more common, a new framework for peaceful cohabitation is to be found. Artificial intelligence, being able to take part in almost any kind of interaction where personal presence is not necessary without being recognized as a non-human actor, is now able to break the law, violate people's rights, and disturb social peace in many other ways. Therefore, a code of peaceful coexistence is to be found or created. We should consider whether human rights can serve as the code of ethical and rightful conduct in the new era of artificial intelligence and human coexistence. In this paper, we will examine the applicability of fundamental rights to human-robot interactions as well as to actions performed by artificial intelligence without any human interaction whatsoever. Robot ethics had been a topic of discussion and debate in philosophy, ethics, computing, legal science, and science-fiction writing long before the first functional artificial intelligence was introduced. Legal science and legislation have approached artificial intelligence from different angles, regulating different areas (e.g., data protection, telecommunications, copyright issues), but they are only chipping away at the mountain of legal issues concerning robotics.
For a widely acceptable and permanent solution, a more general set of rules would be preferable to the detailed regulation of specific issues. We argue that human rights as recognized worldwide can be adapted to serve as a guideline and a common basis for the coexistence of robots and humans. This solution has many virtues: people do not need to adjust to a completely unknown set of standards, the system has proved itself able to withstand the trials of time, legislation is easier, and the actions of non-human entities are more easily adjudicated within their own framework. In this paper, we will examine the system of fundamental rights (as defined in the most widely accepted sources, the 1966 UN human rights covenants) and try to adapt each individual right to the actions of artificial intelligence actors; in each case, we will examine the possible effects of such an approach on the legal system and on society, and finally we will also examine its effect on the IT industry.

Keywords: human rights, robot ethics, artificial intelligence and law, human-robot interaction

Procedia PDF Downloads 236
1271 Transformation of Periodic Fuzzy Membership Function to Discrete Polygon on Circular Polar Coordinates

Authors: Takashi Mitsuishi

Abstract:

Fuzzy logic has gained acceptance in recent years in the fields of the social sciences and humanities, such as psychology and linguistics, because it can manage the fuzziness of words and human subjectivity in a logical manner. However, the major field of application of fuzzy logic is control engineering, as it is part of set theory and mathematical logic. The Mamdani method, which is the most popular technique for approximate reasoning in the field of fuzzy control, is one way to numerically represent the control afforded by human language and sensitivity, and it has been applied in various practical control plants. Fuzzy logic has been gradually developing as an artificial intelligence in different applications such as neural networks, expert systems, and operations research. The objects of inference vary across application fields; some of these, such as time, angle, color, symptom, and medical condition, have fuzzy membership functions that are periodic. In the defuzzification stage, the domain of the membership function should be unique so that a unique defuzzified value is obtained. However, if the domain of a periodic membership function is forced to be unique, an unintuitive defuzzified value may be obtained as the inference result when using the center-of-gravity method. Therefore, the authors propose a method of circular-polar-coordinate transformation and defuzzification of periodic membership functions in this study. The transformation to circular polar coordinates simplifies the domain of the periodic membership function, and the defuzzified value in circular polar coordinates is an argument (angle). Furthermore, this argument must be calculated from a closed plane figure, which is the periodic membership function on the circular polar coordinates. If the closed plane figure is treated as continuous, with the continuity of the membership function, a significant amount of computation is required.
Therefore, to simplify the practical example and significantly reduce the computational complexity, we discretize the continuous interval and the membership function in this study. Three methods are proposed to decide the argument from the discrete polygon into which the continuous plane figure is transformed. The first method takes the argument of a straight line passing through the origin and the coordinate of the arithmetic mean of the polygon's vertex coordinates (physical center of gravity). The second takes the argument of a straight line passing through the origin and the coordinate of the geometric center of gravity of the polygon. The third takes the argument of a straight line passing through the origin that bisects the perimeter of the polygon (or of the closed continuous plane figure).
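The first two of these methods can be sketched as follows. This is a minimal illustration only, assuming the discrete polygon is given as a list of Cartesian vertex coordinates on the polar plane; the function names and example points are hypothetical, not taken from the study:

```python
import math

def vertex_mean_argument(pts):
    # Method 1: argument of the line through the origin and the
    # arithmetic mean of the vertex coordinates (physical center of gravity).
    mx = sum(x for x, _ in pts) / len(pts)
    my = sum(y for _, y in pts) / len(pts)
    return math.atan2(my, mx)

def centroid_argument(pts):
    # Method 2: argument of the line through the origin and the
    # geometric center of gravity (area centroid via the shoelace formula).
    a = cx = cy = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return math.atan2(cy / (6 * a), cx / (6 * a))

# A unit square centered at (1, 0): both methods give argument 0.
pts = [(0.5, -0.5), (1.5, -0.5), (1.5, 0.5), (0.5, 0.5)]
```

For a symmetric polygon the two arguments coincide; for skewed membership shapes they differ, which is exactly the choice the three proposed methods address.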

Keywords: defuzzification, fuzzy membership function, periodic function, polar coordinates transformation

Procedia PDF Downloads 360
1270 Capital Accumulation and Unemployment in Namibia, Nigeria and South Africa

Authors: Abubakar Dikko

Abstract:

The research investigates the causes of unemployment in Namibia, Nigeria and South Africa, and the role of capital accumulation in reducing the unemployment profile of these economies, as proposed by post-Keynesian economics. This is conducted through an extensive review of the literature on NAIRU models, focused on the post-Keynesian view of unemployment within the NAIRU framework. The NAIRU (non-accelerating inflation rate of unemployment) model has become a dominant framework in the macroeconomic analysis of unemployment. The study adopts the post-Keynesian argument that capital accumulation is a major determinant of unemployment. Unemployment remains a fundamental socio-economic challenge facing African economies and a burden to their citizens. Namibia, Nigeria and South Africa are large African nations battling high unemployment; in 2013, the countries recorded unemployment rates of 16.9%, 23.9% and 24.9%, respectively, and most of the unemployed in these economies are youth. Roughly 40% of working-age South Africans have jobs, and the share in Nigeria and Namibia is even lower. Unemployment in Africa has wide implications for households, leading to extensive poverty and inequality and fostering rampant criminality. Recently, South Africa experienced xenophobic attacks driven by unemployment: the high unemployment rate led citizens to chase foreigners out of the country, claiming that they had taken their jobs. The study proposes that there is a strong relationship between capital accumulation and unemployment in Namibia, Nigeria and South Africa, and that insufficient capital accumulation is responsible for the high unemployment rates in these countries. For these economies to achieve a steady-state level of employment and a satisfactory level of economic growth and development, capital accumulation needs to take place.
The countries in the study were selected after careful research and investigation, based on the following criteria: African economies with unemployment rates above 15% and about 40% of their workforce unemployed, which the International Labour Organization (ILO) regards as a critical level of unemployment in Africa, and African countries with low levels of capital accumulation. Appropriate statistical measures were employed using time-series analysis, and the results revealed that capital accumulation is the main driver of unemployment performance in the chosen African countries: an increase in the accumulation of capital causes unemployment to fall significantly. The results of the research will be useful and relevant to the federal governments and the ministries, departments and agencies (MDAs) of Namibia, Nigeria and South Africa in resolving the high and persistent unemployment rates in their economies, which are a great burden that slows the growth and development of developing economies. The results can also be useful to the World Bank, the African Development Bank and the ILO in their further research on how to tackle unemployment in developing and emerging economies.
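The kind of time-series relation the study tests can be illustrated with a minimal ordinary-least-squares slope estimate. The figures below are purely hypothetical placeholders, not the study's data, and the abstract does not specify the estimator actually used:

```python
def ols_slope(x, y):
    # Slope of the OLS regression of y on x: cov(x, y) / var(x).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

capital = [10, 12, 13, 15, 18, 21]    # hypothetical accumulation index
unemp = [26, 25, 24.5, 24, 22, 20]    # hypothetical unemployment rate, %
slope = ols_slope(capital, unemp)     # negative: more capital, less unemployment
```

A negative estimated slope is the pattern the post-Keynesian hypothesis predicts; the actual study would add the usual time-series diagnostics (stationarity, lags) that a bare OLS slope omits.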

Keywords: capital accumulation, unemployment, NAIRU, Post-Keynesian economics

Procedia PDF Downloads 257
1269 A Study of Lapohan Traditional Pottery Making in Selakan Island, Semporna Sabah: An Initial Framework

Authors: Norhayati Ayob, Shamsu Mohamad

Abstract:

This paper aims to provide an initial background of the process of making traditional ceramic pottery, focusing on the materials and the influence of cultural heritage. Ceramic pottery is one of the hallmarks of Sabah's heirloom, not only used for cooking and storage containers but also closely linked with folk cultures and heritage. The Bajau Laut ethnic community of Semporna, better known as the Sea Gypsies, are mostly boat dwellers who work as fishermen along the coast. This community is famous for its own artistic traditional heirloom, especially the traditional hand-made clay stove called Lapohan. In the daily life of the Bajau Laut community, the Lapohan (clay stove) is used to prepare meals and as a food warmer while they are at sea. Moreover, Lapohan pottery conveys the symbolic meaning of natural objects, portraying the identity and values of the Bajau Laut community. It is acknowledged that the basic process of making potterywares was much the same for people all across the world; nevertheless, it is crucial to consider that different ethnic groups may have their own styles and choices of raw materials. Furthermore, it is still unknown why and how the Bajau Laut of Semporna started making their own pottery and how the craft has survived until today by depending heavily on the raw materials available in Semporna. In addition, an emerging problem faced by pottery makers in Sabah is the absence of young successors to continue the heirloom legacy. Therefore, this research aims to explore traditional pottery making in Sabah by investigating the background history of Lapohan pottery and proposing a classification of Lapohan based on the designs and motifs of traditional pottery identified throughout the study. It is postulated that different techniques and forms of making traditional pottery may produce different types of pottery in terms of surface decoration, shape, and size, reflecting different cultures.
This study will be conducted at Selakan Island, Semporna, the only location where Lapohan making still takes place. The study also follows the chronological process of making pottery and the taboos associated with preparing the clay, forming, decoration techniques, motif application, and firing techniques. The relevant information will be gathered through field study, including observation, in-depth interviews, and video recording. In-depth interviews will be conducted with several potters, and the conversations and the pottery-making process will be recorded in order to understand the actual process of making Lapohan. The findings are expected to identify several types of Lapohan based on different designs and cultures; for example, a stove with a flat-shaped design or a round shape on top will be labeled with a suitable name based on the makers' culture. In conclusion, it is hoped that this study will contribute to the conservation of traditional pottery making in Sabah and help preserve the community's culture and heirloom for future generations.

Keywords: Bajau Laut, culture, Lapohan, traditional pottery

Procedia PDF Downloads 182
1268 Whistleblowing: A Contemporary Topic Concerning Businesses

Authors: Andreas Kapardis, Maria Krambia-Kapardis, Sofia Michaelides-Mateou

Abstract:

Corruption and economic crime are serious problems affecting the sustainability of businesses in the 21st century. Nowadays, many corruption or fraud cases come to light thanks to whistleblowers. This article first discusses the concept of whistleblowing as well as relevant legislation enacted around the world. Secondly, it discusses the findings of a survey of whistleblowers or could-have-been whistleblowers. Finally, suggestions for the development of a comprehensive whistleblowing framework are considered. Whistleblowing can be described as expressing a concern about wrongdoing within an organization, such as a corporation, an association, an institution, or a union. Such a concern must be in the public interest and in good faith and should relate to the cover-up of matters that could potentially result in a miscarriage of justice, a crime or criminal offence, or threats to health and safety. Whistleblowing has proven to be an effective anti-corruption mechanism and a powerful tool that helps deter fraud, violations, and malpractice within organizations, corporations, and the public sector. Research in the field has concentrated on the reasons for whistleblowing and financial bounties; the effectiveness of whistleblowing; whistleblowing as prosocial behavior, with its psychological perspective and consequences; and whistleblowing as a tool for protecting shareholders, saving lives, and saving billions of dollars of public funds. However, no other study of whistleblowing has been carried out on whistleblowers or intended whistleblowers themselves. The study reported in the current paper analyses the responses of 74 whistleblowers or intended whistleblowers: the reasons behind their decision to blow the whistle or not to proceed, and any regrets they may have had. In addition, a profile of the whistleblower is developed, covering age, gender, marital and family status, and position in the organization.
Lessons learned from the intended whistleblowers, and their responses to the question of whether they would be willing to blow the whistle again, show that enacting legislation to protect the whistleblower is not enough. Similarly, rewarding the whistleblower does not appear to provide an incentive, since the majority noted that "work ethics is more important than financial rewards". We recommend the development of a comprehensive and holistic framework for the protection of the whistleblower that ensures remedial actions are taken immediately once a whistleblower comes forward. The suggested framework comprises (a) hard legislation ensuring that whistleblowers follow certain principles when blowing the whistle and, in return, are protected for a period of five years from being fired, dismissed, bullied, or harassed; (b) soft legislation establishing an agency that, first, ensures psychological and legal advice is provided to whistleblowers and, second, immediately takes any remedial action required to avert the undesirable events reported by a whistleblower; and (c) mechanisms to ensure the coordination of the actions taken.

Keywords: whistleblowing, business ethics, legislation, business

Procedia PDF Downloads 306
1267 Secure Optimized Ingress Filtering in Future Internet Communication

Authors: Bander Alzahrani, Mohammed Alreshoodi

Abstract:

Information-centric networking (ICN) using architectures such as the Publish-Subscribe Internet Technology (PURSUIT) has been proposed as a new networking model that aims at replacing the current end-centric networking model of the Internet. This emerging model focuses on what is being exchanged rather than on which network entities are exchanging information, which allows control-plane functions such as routing and host location to be specified according to the content items. The forwarding plane of the PURSUIT ICN architecture uses a simple and lightweight mechanism based on Bloom filter technology to forward packets. Although this forwarding scheme solves many problems of today's Internet, such as routing-table growth and scalability issues, it is vulnerable to brute-force attacks, which are a starting point for distributed denial-of-service (DDoS) attacks. In this work, we design and analyze a novel source-routing and information delivery technique that keeps the simplicity of Bloom filter-based forwarding while being able to deter attacks such as denial-of-service attacks at the ingress of the network. To achieve this, special forwarding nodes called Edge-FW are attached directly to end-user nodes and perform a security test on maliciously injected random packets at the ingress of the path, preventing possible brute-force attacks at an early stage. In this technique, a core entity of the PURSUIT ICN architecture called the topology manager, which is responsible for finding the shortest path and creating the forwarding identifier (FId), uses a cryptographically secure hash function to create a 64-bit hash, h, over the formed FId; this hash is included in the packet for authentication purposes. Our proposal restricts the attacker from injecting packets carrying random FIds with a high filling factor ρ by optimizing and reducing the maximum allowed filling factor ρm in the network.
We optimize the FId to the minimum possible filling factor, where ρ ≤ ρm, while still supporting longer delivery trees, so network scalability is not affected by the chosen ρm. With this scheme, the filling factor of any legitimate FId never exceeds ρm, while the filling factor of illegitimate FIds cannot exceed the chosen small value of ρm. Therefore, injecting a packet containing an FId with a large filling factor, to achieve a higher attack probability, is no longer possible. The preliminary analysis of this proposal indicates that, with the designed scheme, the forwarding function can detect and prevent malicious activities such as DDoS attacks at an early stage and with very high probability.
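The Edge-FW test described above amounts to two cheap checks per packet: bound the filling factor and verify the hash over the FId. A minimal sketch, assuming a 256-bit in-packet Bloom filter, ρm = 0.4, and SHA-256 truncated to 64 bits as a stand-in for the secure hash (the constants, key handling, and function names are illustrative assumptions, not the paper's specification):

```python
import hashlib

FID_BITS = 256   # assumed width of the in-packet Bloom filter (FId)
RHO_MAX = 0.4    # assumed maximum allowed filling factor rho_m

def filling_factor(fid: int) -> float:
    # Fraction of bits set to 1 in the forwarding identifier.
    return bin(fid).count("1") / FID_BITS

def authenticate(fid: int, secret: bytes) -> bytes:
    # 64-bit hash h over the FId, as computed by the topology manager.
    data = secret + fid.to_bytes(FID_BITS // 8, "big")
    return hashlib.sha256(data).digest()[:8]

def edge_fw_check(fid: int, tag: bytes, secret: bytes) -> bool:
    # Edge-FW node: drop packets whose filling factor exceeds rho_m
    # or whose authentication tag does not verify.
    return filling_factor(fid) <= RHO_MAX and tag == authenticate(fid, secret)
```

A legitimate FId (a few link identifiers OR'd together) passes both checks, while a brute-force FId with most bits set is rejected by the filling-factor bound alone, before any forwarding work is done.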

Keywords: forwarding identifier, filling factor, information centric network, topology manager

Procedia PDF Downloads 150
1266 An Empty Canvas is Full

Authors: Joonha Park

Abstract:

This essay examines the Soviet artist Pavel Korin's pursuit of his life-long project, "Requiem/Passing of the Rus," which frames the funeral of Tikhon, the last great defender of the Russian Orthodox Church during the Great Purge, as the final moment of "Rus," the identity of the Russian people built up over the 1,000-year history of Russian Orthodoxy. Korin's project survives as a series of 29 life-sized portraits and a monumental blank canvas. Born into a family dedicated to iconography, Korin witnessed the historic drama of Stalin's terror; he therefore tried to convey the nation's mourning for the disappearance of "Rus" and its disapproval of the Soviet notion of atheism. Yet, because of Korin's success as a state artist, many believed that political pressure had led him to give up his belief, and controversy arose over the fact that Korin left the canvas blank. The empty 40-square-meter canvas, which has remained untouched in his studio since 1930, supports this theory to an extent. However, sources such as Korin's notes, primary accounts from his fellow Soviet artists, and testimonies from his wife suggest that this assumption is incorrect. Moreover, Korin's uninterrupted relationship with the church and the religious attributes in his commissioned works have been brought up as evidence of his continued belief. The empty canvas not only represents Korin's discontent with the repression and hardships that the Orthodox Church experienced but also depicts the identity that coexisted with the Church, bequeathing this idea to future generations. The faultless canvas surrounded by the striking 29 portraits is a symbol of the highest spirit, similar to that of the icon paintings placed in every Russian house that unite the Russian people to this day; one can therefore deduce that the legacy of "Requiem" remains relevant to the Russian people even under freedom of religious expression.
Consequently, "Requiem" went on public display at the Tretyakov Gallery for the first time in 2013, even though Korin had begun the piece in 1925. The exhibition extols Korin not only as an artist but also as a historian: by recording the turmoil of the Great Oppression, Korin exhibited the social responsibility universal to artists across time and space. In this essay, the legacy Korin left behind, both to his contemporaries and to posterity, is reevaluated through the lens of his works, unfinished as they were.

Keywords: Pavel Korin, art history, art, Russia, Soviet Union, Requiem, Russian Orthodox Church, Tretyakov Gallery, contemporary art, socialist realism, Maxim Gorky

Procedia PDF Downloads 416
1265 Stability of a Biofilm Reactor Able to Degrade a Mixture of the Organochlorine Herbicides Atrazine, Simazine, Diuron and 2,4-Dichlorophenoxyacetic Acid to Changes in the Composition of the Supply Medium

Authors: I. Nava-Arenas, N. Ruiz-Ordaz, C. J. Galindez-Mayer, M. L. Luna-Guido, S. L. Ruiz-López, A. Cabrera-Orozco, D. Nava-Arenas

Abstract:

Among the most important herbicides, the organochlorine compounds are of considerable interest due to their recalcitrance to chemical, biological, and photolytic degradation, their persistence in the environment, their mobility, and their bioaccumulation. The most widely used herbicides in North America are primarily 2,4-dichlorophenoxyacetic acid (2,4-D), the triazines (atrazine and simazine), and, to a lesser extent, diuron. Soils and water bodies are frequently contaminated by mixtures of these xenobiotics. For this reason, this work studied the operational stability of an aerobic biofilm reactor under changes in the composition of the supplied medium. The reactor was packed with fragments of volcanic rock that retained a complex microbial film able to degrade a mixture of the organochlorine herbicides atrazine, simazine, diuron, and 2,4-D, and whose members carry the microbial genes encoding the main catabolic enzymes: atzABCD, tfdACD, and puhB. To acclimate the attached microbial community, the biofilm reactor was fed continuously with a mineral minimal medium containing the herbicides (in mg•L-1): diuron, 20.4; atrazine, 14.2; simazine, 11.4; and 2,4-D, 59.7, as carbon and nitrogen sources. Throughout the bioprocess, removal efficiencies of 92-100% for the herbicides, 78-90% for COD, and 92-96% for TOC were reached, along with dehalogenation of 61-83%. In the microbial community, the genes encoding the catabolic enzymes for the different herbicides, tfdACD and puhB, and occasionally the genes atzA and atzC, were detected. After acclimatization, the triazine herbicides were removed from the mixture formulation, and volumetric loading rates of the 2,4-D and diuron mixture were supplied continuously to the reactor (1.9-21.5 mg herbicides •L-1 •h-1). Over this stage of the bioprocess, removal efficiencies of 86-100% for the herbicide mixture, 63-94% for COD, and 90-100% for TOC were obtained, with dehalogenation values of 63-100%.
It was also observed that the genes encoding the enzymes for the catabolism of both herbicides, tfdACD and puhB, were detected consistently, and the atzA and atzC genes occasionally. Subsequently, the triazine herbicides atrazine and simazine were restored to the medium supply, and different volumetric loading rates of this mixture were fed continuously to the reactor (2.9 to 12.6 mg herbicides •L-1 •h-1). During this new treatment process, removal efficiencies of 65-95% for the herbicide mixture, 63-92% for COD, and 66-89% for TOC were observed, with 73-94% dehalogenation. In this last case, the genes tfdACD, puhB, and atzABC, encoding the enzymes involved in the catabolism of the distinct herbicides, were detected consistently. The atzD gene, encoding the cyanuric acid hydrolase enzyme, could not be detected, though partial degradation of cyanuric acid was determined to occur. In general, the community in the biofilm reactor showed catabolic stability, adapting to changes in the loading rates and composition of the herbicide mixture and preserving its ability to degrade the four herbicides tested, although there was a significant delay in the time required to recover degradation of the herbicides.

Keywords: biodegradation, biofilm reactor, microbial community, organochlorine herbicides

Procedia PDF Downloads 435
1264 The Quantum Theory of Music and Human Languages

Authors: Mballa Abanda Luc Aurelien Serge, Henda Gnakate Biba, Kuate Guemo Romaric, Akono Rufine Nicole, Zabotom Yaya Fadel Biba, Petfiang Sidonie, Bella Suzane Jenifer

Abstract:

The main hypotheses proposed around the definition of the syllable and of music, and around the common origin of music and language, should lead the reader to reflect on the cross-cutting questions raised by the debate on the notion of universals in linguistics and musicology. These are objects of controversy, and therein lies the interest: the debate raises questions that are at the heart of theories of language. This is an inventive, original, and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, machine translation, and artificial intelligence. When this theory is applied to any text of a folk song in a tone language, one not only pieces together the exact melody, rhythm, and harmonies of that song, as if it were known in advance, but also the exact speech of that language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as has one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music. With experimentation confirming the theorization, the author designed a semi-digital, semi-analog application that translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, the author uses music reading and writing software to collect the data extracted from his mother tongue, already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book).
Translation works from writing to writing, from writing to speech, and from writing to music. Mode of operation: the user types a text on the computer, a structured song (chorus-verse), and requests from the machine a melody in blues, jazz, world music, variety, etc. The software runs, offers a choice of harmonies, and the user then selects a melody.

Keywords: language, music, sciences, quantum entanglement

Procedia PDF Downloads 76
1263 The Potential of On-Demand Shuttle Services to Reduce Private Car Use

Authors: B. Mack, K. Tampe-Mai, E. Diesch

Abstract:

Findings of an ongoing discrete choice study of future transport mode choice will be presented. Many urban centers face the triple challenge of coping with ever-increasing traffic congestion, environmental pollution, and greenhouse gas emissions brought about by private car use. In principle, private car use may be diminished by extending public transport systems such as bus lines, trams, tubes, and trains. However, there are limits to increasing the (perceived) spatial and temporal flexibility and to reducing peak-time crowding of classical public transport systems. An emerging type of system, publicly or privately operated on-demand shuttle bus services, seems suitable to ameliorate the situation. A fleet of on-demand shuttle buses operates without fixed stops or schedules. It may be deployed efficiently in that each bus picks up passengers whose itineraries can be combined into an optimized route. Crowding may be minimized by limiting the number of seats and the inter-seat distance in each bus. The study is conducted as a discrete choice experiment. The choice between private car, public transport, and shuttle service is registered as a function of several push and pull factors (financial costs, travel time, walking distance, mobility tax/congestion charge, and waiting time/parking-space search time). After completing the discrete choice items, each participant rates the three modes of transport with regard to the pull factors of comfort, safety, privacy, and the opportunity to engage in activities such as reading or surfing the internet. These ratings are entered as additional predictors into the discrete choice experiment regression model. The study is conducted in the region of Stuttgart in southern Germany. N = 1000 participants are being recruited. Participants are between 18 and 69 years of age, hold a driver's license, and live in the city or the surrounding region of Stuttgart.
In the discrete choice experiment, participants are asked to assume they lived within the Stuttgart region, but outside of the city, and were planning the journey from their apartment to their place of work, training, or education during the peak traffic time in the morning. Then, for each item of the discrete choice experiment, they are asked to choose between the transport modes of private car, public transport, and on-demand shuttle in the light of particular values of the push and pull factors studied. The study will provide valuable information on the potential of switching from private car use to the use of on-demand shuttles, but also on the less desirable potential of switching from public transport to on-demand shuttle services. Furthermore, information will be provided on the modulation of these switching potentials by pull and push factors.
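Discrete choice data of this kind are typically analyzed with a multinomial logit model, in which each mode's choice probability follows from a linear utility over the push and pull factors. A minimal sketch of that mechanic, with entirely hypothetical coefficients and attribute values (the study estimates its own coefficients from the collected choices):

```python
import math

# Hypothetical utility weights: cost in EUR, travel time in minutes,
# comfort rating on a 1-5 scale. Illustrative values only.
BETA = {"cost": -0.10, "time": -0.05, "comfort": 0.30}

def utility(alt):
    return sum(BETA[k] * alt[k] for k in BETA)

def choice_probabilities(alternatives):
    # Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j),
    # shifted by the max utility for numerical stability.
    v = {name: utility(a) for name, a in alternatives.items()}
    vmax = max(v.values())
    expv = {n: math.exp(x - vmax) for n, x in v.items()}
    z = sum(expv.values())
    return {n: e / z for n, e in expv.items()}

modes = {
    "car":     {"cost": 8.0, "time": 30, "comfort": 4},
    "public":  {"cost": 4.0, "time": 45, "comfort": 2},
    "shuttle": {"cost": 5.0, "time": 35, "comfort": 3},
}
probs = choice_probabilities(modes)
```

Raising the car's cost (a push factor such as a congestion charge) lowers its utility and shifts probability mass toward the other two modes, which is exactly the switching potential the study quantifies.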

Keywords: determinants of travel mode choice, on-demand shuttle services, private car use, public transport

Procedia PDF Downloads 181
1262 The Concept of the Family and Its Principles from the Perspective of International Human Rights Instruments

Authors: Mahya Saffarinia

Abstract:

The family has existed as a natural unit of human relations from the beginning of human society until now, forming the core of the relationship between women, men, and children. However, the definition of the family, the related rights and duties, the principles governing the family, and the family's impact on other individual or social phenomena have changed over time, especially in recent decades, and the subject has now become an important category of study, including interdisciplinary study. It is difficult to provide an accurate and comprehensive definition of the family, and different cultures, customs, and legal systems offer different definitions. The legal principles governing the family are the general rules of law that organize its different dimensions; dozens of partial rules are inferred from them or defined in their light. Each of these principles has its own detailed history of formation. In the international human rights standards gradually developed over the past 72 years, numerous provisions can be found that state a rule in the field of family law or provide an interpretation of existing international rules, including the obligations of governments with respect to the family. Based on a descriptive-analytical method and an examination of human rights instruments, the present study seeks to explain the elements relevant to defining the family and the principles governing it. This article makes it clear that international instruments do not provide a clear definition of the family and that governments are empowered to define the family in terms of the cultural context of their own communities.
At the same time, it is stipulated that governments do not have exclusive authority to provide this definition, and certain principles must be treated as essential elements. In addition, seven principles are identified as general legal rules governing all international human rights instruments related to the family, such as the principle of voluntary family formation and the prohibition of forced marriage, and the principle of respect for the human dignity of all family members. Each of these seven principles has generated debate, and the acceptance or non-acceptance of each has different consequences for family rights and duties, for the relations between family members, and even for the family's interactions with others and with society. One consequence of the validity of these principles in family-related human rights standards is that many existing national legal systems need, in some cases, to be amended and their regulations revised, and some established cultural traditions that these principles render inhumane need to be modified and changed. Of course, this process of subjecting the family to principles derived from human rights standards also has vulnerabilities and is open to misinterpretations that should not be neglected.

Keywords: family, human rights, international instruments, principles

Procedia PDF Downloads 175